Test Report: KVM_Linux_containerd 17830

f2d99d5d3acbee63fb92e6e0c0b75bbff35f3ad4:2024-01-08:32615

Failed tests (1/314)

| Order | Failed test                            | Duration |
|-------|----------------------------------------|----------|
| 44    | TestAddons/parallel/NvidiaDevicePlugin | 8.13s    |
TestAddons/parallel/NvidiaDevicePlugin (8.13s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9rk2b" [98a3c63e-8291-437f-a7d0-e630bf91f6de] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00519389s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-917645
addons_test.go:955: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-917645: exit status 11 (383.052232ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-01-08T22:58:49Z" level=error msg="stat /run/containerd/runc/k8s.io/a0a6194accc6da92938a6e162813941c7f3082c4d1236fe0923f629f4b4d0684: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:956: failed to disable nvidia-device-plugin: args "out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-917645" : exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-917645 -n addons-917645
helpers_test.go:244: <<< TestAddons/parallel/NvidiaDevicePlugin FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/NvidiaDevicePlugin]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-917645 logs -n 25: (1.840459582s)
helpers_test.go:252: TestAddons/parallel/NvidiaDevicePlugin logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |                     |
	|         | -p download-only-445610              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |                     |
	|         | -p download-only-445610              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| start   | -o=json --download-only              | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC |                     |
	|         | -p download-only-445610              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	| delete  | -p download-only-445610              | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	| delete  | -p download-only-445610              | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	| start   | --download-only -p                   | binary-mirror-599788 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC |                     |
	|         | binary-mirror-599788                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:33131               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-599788              | binary-mirror-599788 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:54 UTC |
	| addons  | disable dashboard -p                 | addons-917645        | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC |                     |
	|         | addons-917645                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-917645        | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC |                     |
	|         | addons-917645                        |                      |         |         |                     |                     |
	| start   | -p addons-917645 --wait=true         | addons-917645        | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC | 08 Jan 24 22:58 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2          |                      |         |         |                     |                     |
	|         | --container-runtime=containerd       |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-917645        | jenkins | v1.32.0 | 08 Jan 24 22:58 UTC |                     |
	|         | -p addons-917645                     |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:54:57
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:54:57.055052   16328 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:54:57.055219   16328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:57.055230   16328 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:57.055237   16328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:57.055475   16328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
	I0108 22:54:57.056086   16328 out.go:303] Setting JSON to false
	I0108 22:54:57.056904   16328 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2214,"bootTime":1704752283,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:54:57.056964   16328 start.go:138] virtualization: kvm guest
	I0108 22:54:57.058930   16328 out.go:177] * [addons-917645] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:54:57.060455   16328 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 22:54:57.061676   16328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:54:57.060501   16328 notify.go:220] Checking for updates...
	I0108 22:54:57.062916   16328 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	I0108 22:54:57.064276   16328 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	I0108 22:54:57.065630   16328 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 22:54:57.066920   16328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 22:54:57.068300   16328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:54:57.097842   16328 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 22:54:57.099043   16328 start.go:298] selected driver: kvm2
	I0108 22:54:57.099053   16328 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:54:57.099062   16328 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 22:54:57.099674   16328 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:54:57.099744   16328 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-8357/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:54:57.112797   16328 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:54:57.112839   16328 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 22:54:57.113046   16328 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0108 22:54:57.113112   16328 cni.go:84] Creating CNI manager for ""
	I0108 22:54:57.113134   16328 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0108 22:54:57.113155   16328 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 22:54:57.113171   16328 start_flags.go:323] config:
	{Name:addons-917645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-917645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:54:57.113332   16328 iso.go:125] acquiring lock: {Name:mk34e93ce8d707d1ba4f39937867ad6e31ba9f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:54:57.114895   16328 out.go:177] * Starting control plane node addons-917645 in cluster addons-917645
	I0108 22:54:57.116060   16328 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 22:54:57.116098   16328 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0108 22:54:57.116112   16328 cache.go:56] Caching tarball of preloaded images
	I0108 22:54:57.116182   16328 preload.go:174] Found /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0108 22:54:57.116194   16328 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0108 22:54:57.116500   16328 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/config.json ...
	I0108 22:54:57.116524   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/config.json: {Name:mk75dbb4f77bff2dd53961ecf35c7580586d775e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:54:57.116680   16328 start.go:365] acquiring machines lock for addons-917645: {Name:mkb1b73981430a9098a03a31078514ab2148d5f2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0108 22:54:57.116747   16328 start.go:369] acquired machines lock for "addons-917645" in 50.635µs
	I0108 22:54:57.116778   16328 start.go:93] Provisioning new machine with config: &{Name:addons-917645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-917645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 22:54:57.116857   16328 start.go:125] createHost starting for "" (driver="kvm2")
	I0108 22:54:57.118376   16328 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0108 22:54:57.118515   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:54:57.118555   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:54:57.131616   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33677
	I0108 22:54:57.131989   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:54:57.132536   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:54:57.132557   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:54:57.132876   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:54:57.133038   16328 main.go:141] libmachine: (addons-917645) Calling .GetMachineName
	I0108 22:54:57.133181   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:54:57.133322   16328 start.go:159] libmachine.API.Create for "addons-917645" (driver="kvm2")
	I0108 22:54:57.133355   16328 client.go:168] LocalClient.Create starting
	I0108 22:54:57.133395   16328 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17830-8357/.minikube/certs/ca.pem
	I0108 22:54:57.258966   16328 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17830-8357/.minikube/certs/cert.pem
	I0108 22:54:57.393198   16328 main.go:141] libmachine: Running pre-create checks...
	I0108 22:54:57.393221   16328 main.go:141] libmachine: (addons-917645) Calling .PreCreateCheck
	I0108 22:54:57.393649   16328 main.go:141] libmachine: (addons-917645) Calling .GetConfigRaw
	I0108 22:54:57.394047   16328 main.go:141] libmachine: Creating machine...
	I0108 22:54:57.394064   16328 main.go:141] libmachine: (addons-917645) Calling .Create
	I0108 22:54:57.394193   16328 main.go:141] libmachine: (addons-917645) Creating KVM machine...
	I0108 22:54:57.395470   16328 main.go:141] libmachine: (addons-917645) DBG | found existing default KVM network
	I0108 22:54:57.396169   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:54:57.396036   16351 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000147900}
	I0108 22:54:57.401448   16328 main.go:141] libmachine: (addons-917645) DBG | trying to create private KVM network mk-addons-917645 192.168.39.0/24...
	I0108 22:54:57.465390   16328 main.go:141] libmachine: (addons-917645) DBG | private KVM network mk-addons-917645 192.168.39.0/24 created
	I0108 22:54:57.465481   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:54:57.465359   16351 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17830-8357/.minikube
	I0108 22:54:57.465532   16328 main.go:141] libmachine: (addons-917645) Setting up store path in /home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645 ...
	I0108 22:54:57.465558   16328 main.go:141] libmachine: (addons-917645) Building disk image from file:///home/jenkins/minikube-integration/17830-8357/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 22:54:57.465585   16328 main.go:141] libmachine: (addons-917645) Downloading /home/jenkins/minikube-integration/17830-8357/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17830-8357/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I0108 22:54:57.695617   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:54:57.695518   16351 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa...
	I0108 22:54:57.800534   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:54:57.800427   16351 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/addons-917645.rawdisk...
	I0108 22:54:57.800570   16328 main.go:141] libmachine: (addons-917645) DBG | Writing magic tar header
	I0108 22:54:57.800583   16328 main.go:141] libmachine: (addons-917645) DBG | Writing SSH key tar header
	I0108 22:54:57.800596   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:54:57.800538   16351 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645 ...
	I0108 22:54:57.800734   16328 main.go:141] libmachine: (addons-917645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645
	I0108 22:54:57.800778   16328 main.go:141] libmachine: (addons-917645) Setting executable bit set on /home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645 (perms=drwx------)
	I0108 22:54:57.800800   16328 main.go:141] libmachine: (addons-917645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-8357/.minikube/machines
	I0108 22:54:57.800816   16328 main.go:141] libmachine: (addons-917645) Setting executable bit set on /home/jenkins/minikube-integration/17830-8357/.minikube/machines (perms=drwxr-xr-x)
	I0108 22:54:57.800830   16328 main.go:141] libmachine: (addons-917645) Setting executable bit set on /home/jenkins/minikube-integration/17830-8357/.minikube (perms=drwxr-xr-x)
	I0108 22:54:57.800841   16328 main.go:141] libmachine: (addons-917645) Setting executable bit set on /home/jenkins/minikube-integration/17830-8357 (perms=drwxrwxr-x)
	I0108 22:54:57.800857   16328 main.go:141] libmachine: (addons-917645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-8357/.minikube
	I0108 22:54:57.800872   16328 main.go:141] libmachine: (addons-917645) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0108 22:54:57.800886   16328 main.go:141] libmachine: (addons-917645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17830-8357
	I0108 22:54:57.800898   16328 main.go:141] libmachine: (addons-917645) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0108 22:54:57.800914   16328 main.go:141] libmachine: (addons-917645) Creating domain...
	I0108 22:54:57.800922   16328 main.go:141] libmachine: (addons-917645) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0108 22:54:57.800933   16328 main.go:141] libmachine: (addons-917645) DBG | Checking permissions on dir: /home/jenkins
	I0108 22:54:57.800948   16328 main.go:141] libmachine: (addons-917645) DBG | Checking permissions on dir: /home
	I0108 22:54:57.800965   16328 main.go:141] libmachine: (addons-917645) DBG | Skipping /home - not owner
	I0108 22:54:57.801839   16328 main.go:141] libmachine: (addons-917645) define libvirt domain using xml: 
	I0108 22:54:57.801857   16328 main.go:141] libmachine: (addons-917645) <domain type='kvm'>
	I0108 22:54:57.801869   16328 main.go:141] libmachine: (addons-917645)   <name>addons-917645</name>
	I0108 22:54:57.801880   16328 main.go:141] libmachine: (addons-917645)   <memory unit='MiB'>4000</memory>
	I0108 22:54:57.801892   16328 main.go:141] libmachine: (addons-917645)   <vcpu>2</vcpu>
	I0108 22:54:57.801904   16328 main.go:141] libmachine: (addons-917645)   <features>
	I0108 22:54:57.801917   16328 main.go:141] libmachine: (addons-917645)     <acpi/>
	I0108 22:54:57.801924   16328 main.go:141] libmachine: (addons-917645)     <apic/>
	I0108 22:54:57.801931   16328 main.go:141] libmachine: (addons-917645)     <pae/>
	I0108 22:54:57.801942   16328 main.go:141] libmachine: (addons-917645)     
	I0108 22:54:57.801951   16328 main.go:141] libmachine: (addons-917645)   </features>
	I0108 22:54:57.801957   16328 main.go:141] libmachine: (addons-917645)   <cpu mode='host-passthrough'>
	I0108 22:54:57.801964   16328 main.go:141] libmachine: (addons-917645)   
	I0108 22:54:57.801973   16328 main.go:141] libmachine: (addons-917645)   </cpu>
	I0108 22:54:57.801978   16328 main.go:141] libmachine: (addons-917645)   <os>
	I0108 22:54:57.801986   16328 main.go:141] libmachine: (addons-917645)     <type>hvm</type>
	I0108 22:54:57.802016   16328 main.go:141] libmachine: (addons-917645)     <boot dev='cdrom'/>
	I0108 22:54:57.802033   16328 main.go:141] libmachine: (addons-917645)     <boot dev='hd'/>
	I0108 22:54:57.802042   16328 main.go:141] libmachine: (addons-917645)     <bootmenu enable='no'/>
	I0108 22:54:57.802053   16328 main.go:141] libmachine: (addons-917645)   </os>
	I0108 22:54:57.802061   16328 main.go:141] libmachine: (addons-917645)   <devices>
	I0108 22:54:57.802072   16328 main.go:141] libmachine: (addons-917645)     <disk type='file' device='cdrom'>
	I0108 22:54:57.802091   16328 main.go:141] libmachine: (addons-917645)       <source file='/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/boot2docker.iso'/>
	I0108 22:54:57.802104   16328 main.go:141] libmachine: (addons-917645)       <target dev='hdc' bus='scsi'/>
	I0108 22:54:57.802118   16328 main.go:141] libmachine: (addons-917645)       <readonly/>
	I0108 22:54:57.802131   16328 main.go:141] libmachine: (addons-917645)     </disk>
	I0108 22:54:57.802142   16328 main.go:141] libmachine: (addons-917645)     <disk type='file' device='disk'>
	I0108 22:54:57.802151   16328 main.go:141] libmachine: (addons-917645)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0108 22:54:57.802161   16328 main.go:141] libmachine: (addons-917645)       <source file='/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/addons-917645.rawdisk'/>
	I0108 22:54:57.802169   16328 main.go:141] libmachine: (addons-917645)       <target dev='hda' bus='virtio'/>
	I0108 22:54:57.802178   16328 main.go:141] libmachine: (addons-917645)     </disk>
	I0108 22:54:57.802190   16328 main.go:141] libmachine: (addons-917645)     <interface type='network'>
	I0108 22:54:57.802205   16328 main.go:141] libmachine: (addons-917645)       <source network='mk-addons-917645'/>
	I0108 22:54:57.802218   16328 main.go:141] libmachine: (addons-917645)       <model type='virtio'/>
	I0108 22:54:57.802231   16328 main.go:141] libmachine: (addons-917645)     </interface>
	I0108 22:54:57.802262   16328 main.go:141] libmachine: (addons-917645)     <interface type='network'>
	I0108 22:54:57.802274   16328 main.go:141] libmachine: (addons-917645)       <source network='default'/>
	I0108 22:54:57.802286   16328 main.go:141] libmachine: (addons-917645)       <model type='virtio'/>
	I0108 22:54:57.802300   16328 main.go:141] libmachine: (addons-917645)     </interface>
	I0108 22:54:57.802312   16328 main.go:141] libmachine: (addons-917645)     <serial type='pty'>
	I0108 22:54:57.802326   16328 main.go:141] libmachine: (addons-917645)       <target port='0'/>
	I0108 22:54:57.802340   16328 main.go:141] libmachine: (addons-917645)     </serial>
	I0108 22:54:57.802360   16328 main.go:141] libmachine: (addons-917645)     <console type='pty'>
	I0108 22:54:57.802370   16328 main.go:141] libmachine: (addons-917645)       <target type='serial' port='0'/>
	I0108 22:54:57.802381   16328 main.go:141] libmachine: (addons-917645)     </console>
	I0108 22:54:57.802394   16328 main.go:141] libmachine: (addons-917645)     <rng model='virtio'>
	I0108 22:54:57.802409   16328 main.go:141] libmachine: (addons-917645)       <backend model='random'>/dev/random</backend>
	I0108 22:54:57.802423   16328 main.go:141] libmachine: (addons-917645)     </rng>
	I0108 22:54:57.802436   16328 main.go:141] libmachine: (addons-917645)     
	I0108 22:54:57.802447   16328 main.go:141] libmachine: (addons-917645)     
	I0108 22:54:57.802457   16328 main.go:141] libmachine: (addons-917645)   </devices>
	I0108 22:54:57.802468   16328 main.go:141] libmachine: (addons-917645) </domain>
	I0108 22:54:57.802483   16328 main.go:141] libmachine: (addons-917645) 
	I0108 22:54:57.808076   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:f6:4a:90 in network default
	I0108 22:54:57.808546   16328 main.go:141] libmachine: (addons-917645) Ensuring networks are active...
	I0108 22:54:57.808564   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:54:57.809173   16328 main.go:141] libmachine: (addons-917645) Ensuring network default is active
	I0108 22:54:57.809433   16328 main.go:141] libmachine: (addons-917645) Ensuring network mk-addons-917645 is active
	I0108 22:54:57.809859   16328 main.go:141] libmachine: (addons-917645) Getting domain xml...
	I0108 22:54:57.810464   16328 main.go:141] libmachine: (addons-917645) Creating domain...
	I0108 22:54:59.213419   16328 main.go:141] libmachine: (addons-917645) Waiting to get IP...
	I0108 22:54:59.214031   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:54:59.214396   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:54:59.214426   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:54:59.214368   16351 retry.go:31] will retry after 262.242327ms: waiting for machine to come up
	I0108 22:54:59.477883   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:54:59.478279   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:54:59.478347   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:54:59.478261   16351 retry.go:31] will retry after 287.207749ms: waiting for machine to come up
	I0108 22:54:59.766658   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:54:59.767034   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:54:59.767055   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:54:59.767009   16351 retry.go:31] will retry after 330.911606ms: waiting for machine to come up
	I0108 22:55:00.099590   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:00.099954   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:00.099981   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:00.099918   16351 retry.go:31] will retry after 397.697621ms: waiting for machine to come up
	I0108 22:55:00.499364   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:00.499828   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:00.499853   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:00.499787   16351 retry.go:31] will retry after 694.276663ms: waiting for machine to come up
	I0108 22:55:01.195604   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:01.195930   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:01.195953   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:01.195903   16351 retry.go:31] will retry after 906.250487ms: waiting for machine to come up
	I0108 22:55:02.103299   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:02.103685   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:02.103711   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:02.103641   16351 retry.go:31] will retry after 1.168848932s: waiting for machine to come up
	I0108 22:55:03.274240   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:03.274612   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:03.274642   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:03.274567   16351 retry.go:31] will retry after 1.017274661s: waiting for machine to come up
	I0108 22:55:04.293655   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:04.294092   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:04.294116   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:04.294068   16351 retry.go:31] will retry after 1.487291669s: waiting for machine to come up
	I0108 22:55:05.782712   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:05.783089   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:05.783120   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:05.783046   16351 retry.go:31] will retry after 1.521051599s: waiting for machine to come up
	I0108 22:55:07.305373   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:07.305781   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:07.305807   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:07.305740   16351 retry.go:31] will retry after 2.200562769s: waiting for machine to come up
	I0108 22:55:09.509149   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:09.509529   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:09.509550   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:09.509483   16351 retry.go:31] will retry after 2.427462799s: waiting for machine to come up
	I0108 22:55:11.940959   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:11.941347   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:11.941374   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:11.941301   16351 retry.go:31] will retry after 4.122058291s: waiting for machine to come up
	I0108 22:55:16.067909   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:16.068261   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find current IP address of domain addons-917645 in network mk-addons-917645
	I0108 22:55:16.068284   16328 main.go:141] libmachine: (addons-917645) DBG | I0108 22:55:16.068209   16351 retry.go:31] will retry after 3.447023493s: waiting for machine to come up
	I0108 22:55:19.518364   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.518768   16328 main.go:141] libmachine: (addons-917645) Found IP for machine: 192.168.39.75
	I0108 22:55:19.518806   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has current primary IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.518813   16328 main.go:141] libmachine: (addons-917645) Reserving static IP address...
	I0108 22:55:19.519122   16328 main.go:141] libmachine: (addons-917645) DBG | unable to find host DHCP lease matching {name: "addons-917645", mac: "52:54:00:8d:f2:5b", ip: "192.168.39.75"} in network mk-addons-917645
	I0108 22:55:19.585317   16328 main.go:141] libmachine: (addons-917645) Reserved static IP address: 192.168.39.75
	I0108 22:55:19.585340   16328 main.go:141] libmachine: (addons-917645) Waiting for SSH to be available...
	I0108 22:55:19.585350   16328 main.go:141] libmachine: (addons-917645) DBG | Getting to WaitForSSH function...
	I0108 22:55:19.587824   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.588161   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:19.588185   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.588299   16328 main.go:141] libmachine: (addons-917645) DBG | Using SSH client type: external
	I0108 22:55:19.588321   16328 main.go:141] libmachine: (addons-917645) DBG | Using SSH private key: /home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa (-rw-------)
	I0108 22:55:19.588354   16328 main.go:141] libmachine: (addons-917645) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.75 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0108 22:55:19.588378   16328 main.go:141] libmachine: (addons-917645) DBG | About to run SSH command:
	I0108 22:55:19.588394   16328 main.go:141] libmachine: (addons-917645) DBG | exit 0
	I0108 22:55:19.684041   16328 main.go:141] libmachine: (addons-917645) DBG | SSH cmd err, output: <nil>: 
	I0108 22:55:19.684308   16328 main.go:141] libmachine: (addons-917645) KVM machine creation complete!
	I0108 22:55:19.684611   16328 main.go:141] libmachine: (addons-917645) Calling .GetConfigRaw
	I0108 22:55:19.685128   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:19.685322   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:19.685481   16328 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0108 22:55:19.685495   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:19.686583   16328 main.go:141] libmachine: Detecting operating system of created instance...
	I0108 22:55:19.686601   16328 main.go:141] libmachine: Waiting for SSH to be available...
	I0108 22:55:19.686612   16328 main.go:141] libmachine: Getting to WaitForSSH function...
	I0108 22:55:19.686621   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:19.688770   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.689102   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:19.689124   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.689238   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:19.689393   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:19.689543   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:19.689659   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:19.689814   16328 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:19.690132   16328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0108 22:55:19.690144   16328 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0108 22:55:19.807625   16328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:55:19.807649   16328 main.go:141] libmachine: Detecting the provisioner...
	I0108 22:55:19.807660   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:19.810401   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.810782   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:19.810817   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.810945   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:19.811133   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:19.811301   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:19.811446   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:19.811600   16328 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:19.811907   16328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0108 22:55:19.811918   16328 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0108 22:55:19.929173   16328 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0108 22:55:19.929257   16328 main.go:141] libmachine: found compatible host: buildroot
	I0108 22:55:19.929272   16328 main.go:141] libmachine: Provisioning with buildroot...
	I0108 22:55:19.929284   16328 main.go:141] libmachine: (addons-917645) Calling .GetMachineName
	I0108 22:55:19.929522   16328 buildroot.go:166] provisioning hostname "addons-917645"
	I0108 22:55:19.929543   16328 main.go:141] libmachine: (addons-917645) Calling .GetMachineName
	I0108 22:55:19.929722   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:19.932128   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.932497   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:19.932522   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:19.932702   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:19.932842   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:19.932963   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:19.933047   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:19.933148   16328 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:19.933448   16328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0108 22:55:19.933461   16328 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-917645 && echo "addons-917645" | sudo tee /etc/hostname
	I0108 22:55:20.060575   16328 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-917645
	
	I0108 22:55:20.060603   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:20.063213   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.063521   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.063548   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.063686   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:20.063828   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:20.063935   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:20.064000   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:20.064106   16328 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:20.064478   16328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0108 22:55:20.064497   16328 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-917645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-917645/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-917645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0108 22:55:20.196524   16328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0108 22:55:20.196555   16328 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17830-8357/.minikube CaCertPath:/home/jenkins/minikube-integration/17830-8357/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17830-8357/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17830-8357/.minikube}
	I0108 22:55:20.196576   16328 buildroot.go:174] setting up certificates
	I0108 22:55:20.196589   16328 provision.go:83] configureAuth start
	I0108 22:55:20.196600   16328 main.go:141] libmachine: (addons-917645) Calling .GetMachineName
	I0108 22:55:20.196871   16328 main.go:141] libmachine: (addons-917645) Calling .GetIP
	I0108 22:55:20.199251   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.199603   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.199628   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.199763   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:20.201986   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.202296   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.202324   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.202432   16328 provision.go:138] copyHostCerts
	I0108 22:55:20.202496   16328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-8357/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17830-8357/.minikube/key.pem (1675 bytes)
	I0108 22:55:20.202625   16328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-8357/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17830-8357/.minikube/ca.pem (1078 bytes)
	I0108 22:55:20.202681   16328 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17830-8357/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17830-8357/.minikube/cert.pem (1123 bytes)
	I0108 22:55:20.202726   16328 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17830-8357/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17830-8357/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17830-8357/.minikube/certs/ca-key.pem org=jenkins.addons-917645 san=[192.168.39.75 192.168.39.75 localhost 127.0.0.1 minikube addons-917645]
	I0108 22:55:20.317170   16328 provision.go:172] copyRemoteCerts
	I0108 22:55:20.317226   16328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0108 22:55:20.317251   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:20.319791   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.320105   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.320128   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.320332   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:20.320520   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:20.320663   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:20.320791   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:20.409186   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0108 22:55:20.432608   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0108 22:55:20.455317   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0108 22:55:20.477679   16328 provision.go:86] duration metric: configureAuth took 281.0757ms
	I0108 22:55:20.477705   16328 buildroot.go:189] setting minikube options for container-runtime
	I0108 22:55:20.477884   16328 config.go:182] Loaded profile config "addons-917645": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 22:55:20.477914   16328 main.go:141] libmachine: Checking connection to Docker...
	I0108 22:55:20.477929   16328 main.go:141] libmachine: (addons-917645) Calling .GetURL
	I0108 22:55:20.479134   16328 main.go:141] libmachine: (addons-917645) DBG | Using libvirt version 6000000
	I0108 22:55:20.481326   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.481644   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.481669   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.481859   16328 main.go:141] libmachine: Docker is up and running!
	I0108 22:55:20.481874   16328 main.go:141] libmachine: Reticulating splines...
	I0108 22:55:20.481883   16328 client.go:171] LocalClient.Create took 23.348515633s
	I0108 22:55:20.481907   16328 start.go:167] duration metric: libmachine.API.Create for "addons-917645" took 23.348584856s
	I0108 22:55:20.481921   16328 start.go:300] post-start starting for "addons-917645" (driver="kvm2")
	I0108 22:55:20.481935   16328 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0108 22:55:20.481958   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:20.482188   16328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0108 22:55:20.482216   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:20.484000   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.484336   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.484363   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.484457   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:20.484620   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:20.484760   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:20.484889   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:20.573577   16328 ssh_runner.go:195] Run: cat /etc/os-release
	I0108 22:55:20.577886   16328 info.go:137] Remote host: Buildroot 2021.02.12
	I0108 22:55:20.577909   16328 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-8357/.minikube/addons for local assets ...
	I0108 22:55:20.577989   16328 filesync.go:126] Scanning /home/jenkins/minikube-integration/17830-8357/.minikube/files for local assets ...
	I0108 22:55:20.578015   16328 start.go:303] post-start completed in 96.087759ms
	I0108 22:55:20.578053   16328 main.go:141] libmachine: (addons-917645) Calling .GetConfigRaw
	I0108 22:55:20.578621   16328 main.go:141] libmachine: (addons-917645) Calling .GetIP
	I0108 22:55:20.581135   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.581429   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.581449   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.581680   16328 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/config.json ...
	I0108 22:55:20.581878   16328 start.go:128] duration metric: createHost completed in 23.465009233s
	I0108 22:55:20.581900   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:20.583969   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.584259   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.584290   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.584452   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:20.584613   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:20.584763   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:20.584881   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:20.585049   16328 main.go:141] libmachine: Using SSH client type: native
	I0108 22:55:20.585349   16328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a8e0] 0x80d5c0 <nil>  [] 0s} 192.168.39.75 22 <nil> <nil>}
	I0108 22:55:20.585360   16328 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0108 22:55:20.701165   16328 main.go:141] libmachine: SSH cmd err, output: <nil>: 1704754520.678420086
	
	I0108 22:55:20.701187   16328 fix.go:206] guest clock: 1704754520.678420086
	I0108 22:55:20.701195   16328 fix.go:219] Guest: 2024-01-08 22:55:20.678420086 +0000 UTC Remote: 2024-01-08 22:55:20.58189119 +0000 UTC m=+23.576735594 (delta=96.528896ms)
	I0108 22:55:20.701227   16328 fix.go:190] guest clock delta is within tolerance: 96.528896ms
	I0108 22:55:20.701238   16328 start.go:83] releasing machines lock for "addons-917645", held for 23.584481104s
	I0108 22:55:20.701265   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:20.701515   16328 main.go:141] libmachine: (addons-917645) Calling .GetIP
	I0108 22:55:20.704225   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.704543   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.704575   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.704694   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:20.705148   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:20.705304   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:20.705403   16328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0108 22:55:20.705451   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:20.705600   16328 ssh_runner.go:195] Run: cat /version.json
	I0108 22:55:20.705623   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:20.708060   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.708096   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.708393   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.708419   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:20.708440   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.708528   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:20.708681   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:20.708775   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:20.708850   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:20.708905   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:20.708958   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:20.709015   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:20.709061   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:20.709105   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:20.929819   16328 ssh_runner.go:195] Run: systemctl --version
	I0108 22:55:20.935654   16328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0108 22:55:20.941209   16328 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0108 22:55:20.941274   16328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0108 22:55:20.954599   16328 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0108 22:55:20.954612   16328 start.go:475] detecting cgroup driver to use...
	I0108 22:55:20.954657   16328 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0108 22:55:20.987129   16328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0108 22:55:20.998255   16328 docker.go:203] disabling cri-docker service (if available) ...
	I0108 22:55:20.998317   16328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0108 22:55:21.009951   16328 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0108 22:55:21.021105   16328 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0108 22:55:21.124172   16328 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0108 22:55:21.238667   16328 docker.go:219] disabling docker service ...
	I0108 22:55:21.238740   16328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0108 22:55:21.251689   16328 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0108 22:55:21.263333   16328 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0108 22:55:21.358158   16328 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0108 22:55:21.452462   16328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0108 22:55:21.464641   16328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0108 22:55:21.480535   16328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0108 22:55:21.490463   16328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0108 22:55:21.500171   16328 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0108 22:55:21.500216   16328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0108 22:55:21.509886   16328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:55:21.519556   16328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0108 22:55:21.529290   16328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0108 22:55:21.538949   16328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0108 22:55:21.548889   16328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0108 22:55:21.558399   16328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0108 22:55:21.566865   16328 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0108 22:55:21.566917   16328 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0108 22:55:21.579785   16328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0108 22:55:21.588560   16328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:55:21.700801   16328 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 22:55:21.730701   16328 start.go:522] Will wait 60s for socket path /run/containerd/containerd.sock
	I0108 22:55:21.730773   16328 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 22:55:21.736382   16328 retry.go:31] will retry after 1.119633899s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0108 22:55:22.857196   16328 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0108 22:55:22.862944   16328 start.go:543] Will wait 60s for crictl version
	I0108 22:55:22.863011   16328 ssh_runner.go:195] Run: which crictl
	I0108 22:55:22.866991   16328 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0108 22:55:22.903055   16328 start.go:559] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.11
	RuntimeApiVersion:  v1
	I0108 22:55:22.903136   16328 ssh_runner.go:195] Run: containerd --version
	I0108 22:55:22.938127   16328 ssh_runner.go:195] Run: containerd --version
	I0108 22:55:22.966225   16328 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.7.11 ...
	I0108 22:55:22.967547   16328 main.go:141] libmachine: (addons-917645) Calling .GetIP
	I0108 22:55:22.970097   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:22.970421   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:22.970439   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:22.970649   16328 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0108 22:55:22.974510   16328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:55:22.985868   16328 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 22:55:22.985910   16328 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:55:23.017081   16328 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0108 22:55:23.017137   16328 ssh_runner.go:195] Run: which lz4
	I0108 22:55:23.020847   16328 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0108 22:55:23.024801   16328 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0108 22:55:23.024825   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (457457495 bytes)
	I0108 22:55:24.991989   16328 containerd.go:547] Took 1.971164 seconds to copy over tarball
	I0108 22:55:24.992068   16328 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0108 22:55:27.833702   16328 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.841597447s)
	I0108 22:55:27.833732   16328 containerd.go:554] Took 2.841722 seconds to extract the tarball
	I0108 22:55:27.833745   16328 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0108 22:55:27.875239   16328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0108 22:55:27.980073   16328 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0108 22:55:28.005230   16328 ssh_runner.go:195] Run: sudo crictl images --output json
	I0108 22:55:28.041816   16328 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.4 registry.k8s.io/kube-controller-manager:v1.28.4 registry.k8s.io/kube-scheduler:v1.28.4 registry.k8s.io/kube-proxy:v1.28.4 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0108 22:55:28.041917   16328 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:55:28.041959   16328 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.4
	I0108 22:55:28.041973   16328 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0108 22:55:28.041932   16328 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.4
	I0108 22:55:28.041964   16328 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0108 22:55:28.041948   16328 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.4
	I0108 22:55:28.041902   16328 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.4
	I0108 22:55:28.041953   16328 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0108 22:55:28.043450   16328 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0108 22:55:28.043493   16328 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.4
	I0108 22:55:28.043513   16328 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.4
	I0108 22:55:28.043493   16328 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0108 22:55:28.043557   16328 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.4
	I0108 22:55:28.043507   16328 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0108 22:55:28.043528   16328 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:55:28.043552   16328 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.4
	I0108 22:55:28.300416   16328 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.28.4" and sha "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"
	I0108 22:55:28.300476   16328 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0108 22:55:28.353912   16328 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.28.4" and sha "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e"
	I0108 22:55:28.353967   16328 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0108 22:55:28.372586   16328 containerd.go:251] Checking existence of image with name "registry.k8s.io/pause:3.9" and sha "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"
	I0108 22:55:28.372644   16328 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0108 22:55:28.386117   16328 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.28.4" and sha "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"
	I0108 22:55:28.386168   16328 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0108 22:55:28.394177   16328 containerd.go:251] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.28.4" and sha "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"
	I0108 22:55:28.394220   16328 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0108 22:55:28.398025   16328 containerd.go:251] Checking existence of image with name "registry.k8s.io/etcd:3.5.9-0" and sha "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9"
	I0108 22:55:28.398067   16328 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0108 22:55:28.399240   16328 containerd.go:251] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.10.1" and sha "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"
	I0108 22:55:28.399285   16328 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0108 22:55:29.453651   16328 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.099661192s)
	I0108 22:55:29.453991   16328 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.081326532s)
	I0108 22:55:29.577636   16328 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.191438959s)
	I0108 22:55:29.741302   16328 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.347061081s)
	I0108 22:55:29.741652   16328 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.343569549s)
	I0108 22:55:29.742470   16328 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.343162221s)
	I0108 22:55:29.934419   16328 containerd.go:251] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0108 22:55:29.934484   16328 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0108 22:55:30.145439   16328 cache_images.go:123] Successfully loaded all cached images
	I0108 22:55:30.145458   16328 cache_images.go:92] LoadImages completed in 2.10361593s
	I0108 22:55:30.145549   16328 ssh_runner.go:195] Run: sudo crictl info
	I0108 22:55:30.181941   16328 cni.go:84] Creating CNI manager for ""
	I0108 22:55:30.181964   16328 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0108 22:55:30.181983   16328 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0108 22:55:30.182000   16328 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.75 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-917645 NodeName:addons-917645 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0108 22:55:30.182108   16328 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.75
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-917645"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.75
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.75"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0108 22:55:30.182184   16328 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-917645 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-917645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0108 22:55:30.182237   16328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0108 22:55:30.191852   16328 binaries.go:44] Found k8s binaries, skipping transfer
	I0108 22:55:30.191918   16328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0108 22:55:30.200839   16328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0108 22:55:30.215910   16328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0108 22:55:30.230789   16328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0108 22:55:30.245563   16328 ssh_runner.go:195] Run: grep 192.168.39.75	control-plane.minikube.internal$ /etc/hosts
	I0108 22:55:30.249054   16328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0108 22:55:30.259830   16328 certs.go:56] Setting up /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645 for IP: 192.168.39.75
	I0108 22:55:30.259857   16328 certs.go:190] acquiring lock for shared ca certs: {Name:mkcdaec3fe1259a9c776b58f46621aa2ae9b6b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:30.259998   16328 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17830-8357/.minikube/ca.key
	I0108 22:55:30.381307   16328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt ...
	I0108 22:55:30.381332   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt: {Name:mkc543affde49e074999dd3ef5b755ee8d623c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:30.381485   16328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-8357/.minikube/ca.key ...
	I0108 22:55:30.381495   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/ca.key: {Name:mk90ff0e90f8fa2d4406216e87c9b976f04c6481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:30.381557   16328 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17830-8357/.minikube/proxy-client-ca.key
	I0108 22:55:30.688379   16328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-8357/.minikube/proxy-client-ca.crt ...
	I0108 22:55:30.688414   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/proxy-client-ca.crt: {Name:mkfa4539736f9567151a9a99a8ce22d8928217cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:30.688576   16328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-8357/.minikube/proxy-client-ca.key ...
	I0108 22:55:30.688586   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/proxy-client-ca.key: {Name:mk8a64cbfd7c8765f75538d471f2c883c767a31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:30.688711   16328 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.key
	I0108 22:55:30.688724   16328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt with IP's: []
	I0108 22:55:30.908842   16328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt ...
	I0108 22:55:30.908874   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: {Name:mkce33370bb965aeb4daa3ec41f786fdae82a754 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:30.909027   16328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.key ...
	I0108 22:55:30.909036   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.key: {Name:mk2b4520326f451c668994053ac062eda43b43fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:30.909098   16328 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.key.c1c3c514
	I0108 22:55:30.909114   16328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.crt.c1c3c514 with IP's: [192.168.39.75 10.96.0.1 127.0.0.1 10.0.0.1]
	I0108 22:55:31.090220   16328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.crt.c1c3c514 ...
	I0108 22:55:31.090248   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.crt.c1c3c514: {Name:mk40f8ab7569598596bffc501c5d6e940bcbf51e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:31.090392   16328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.key.c1c3c514 ...
	I0108 22:55:31.090405   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.key.c1c3c514: {Name:mk5b305b89b2487a9a779cfdd120e30f1fb3f58e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:31.090467   16328 certs.go:337] copying /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.crt.c1c3c514 -> /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.crt
	I0108 22:55:31.090539   16328 certs.go:341] copying /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.key.c1c3c514 -> /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.key
	I0108 22:55:31.090580   16328 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/proxy-client.key
	I0108 22:55:31.090595   16328 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/proxy-client.crt with IP's: []
	I0108 22:55:31.201347   16328 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/proxy-client.crt ...
	I0108 22:55:31.201374   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/proxy-client.crt: {Name:mk811105db62a1c6b18aee517672de575ee256c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:31.201516   16328 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/proxy-client.key ...
	I0108 22:55:31.201527   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/proxy-client.key: {Name:mk8393761a78d9a4bfd6cb6ee9e653ce28bda5e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:31.201714   16328 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-8357/.minikube/certs/home/jenkins/minikube-integration/17830-8357/.minikube/certs/ca-key.pem (1679 bytes)
	I0108 22:55:31.201754   16328 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-8357/.minikube/certs/home/jenkins/minikube-integration/17830-8357/.minikube/certs/ca.pem (1078 bytes)
	I0108 22:55:31.201780   16328 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-8357/.minikube/certs/home/jenkins/minikube-integration/17830-8357/.minikube/certs/cert.pem (1123 bytes)
	I0108 22:55:31.201804   16328 certs.go:437] found cert: /home/jenkins/minikube-integration/17830-8357/.minikube/certs/home/jenkins/minikube-integration/17830-8357/.minikube/certs/key.pem (1675 bytes)
	I0108 22:55:31.202364   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0108 22:55:31.225295   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0108 22:55:31.246090   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0108 22:55:31.266897   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0108 22:55:31.288647   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0108 22:55:31.309226   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0108 22:55:31.329990   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0108 22:55:31.350369   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0108 22:55:31.371320   16328 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0108 22:55:31.391837   16328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0108 22:55:31.406824   16328 ssh_runner.go:195] Run: openssl version
	I0108 22:55:31.412291   16328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0108 22:55:31.422472   16328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:55:31.426807   16328 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan  8 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:55:31.426888   16328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0108 22:55:31.432308   16328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0108 22:55:31.442393   16328 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0108 22:55:31.446282   16328 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0108 22:55:31.446331   16328 kubeadm.go:404] StartCluster: {Name:addons-917645 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-917645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:55:31.446396   16328 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0108 22:55:31.446450   16328 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0108 22:55:31.484198   16328 cri.go:89] found id: ""
	I0108 22:55:31.484261   16328 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0108 22:55:31.493868   16328 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0108 22:55:31.503112   16328 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0108 22:55:31.512119   16328 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0108 22:55:31.512161   16328 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0108 22:55:31.691095   16328 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0108 22:55:42.902611   16328 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0108 22:55:42.902675   16328 kubeadm.go:322] [preflight] Running pre-flight checks
	I0108 22:55:42.902738   16328 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0108 22:55:42.902873   16328 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0108 22:55:42.903026   16328 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0108 22:55:42.903117   16328 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0108 22:55:42.904619   16328 out.go:204]   - Generating certificates and keys ...
	I0108 22:55:42.904719   16328 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0108 22:55:42.904797   16328 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0108 22:55:42.904860   16328 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0108 22:55:42.904910   16328 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0108 22:55:42.904959   16328 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0108 22:55:42.905033   16328 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0108 22:55:42.905124   16328 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0108 22:55:42.905236   16328 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-917645 localhost] and IPs [192.168.39.75 127.0.0.1 ::1]
	I0108 22:55:42.905282   16328 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0108 22:55:42.905378   16328 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-917645 localhost] and IPs [192.168.39.75 127.0.0.1 ::1]
	I0108 22:55:42.905432   16328 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0108 22:55:42.905488   16328 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0108 22:55:42.905526   16328 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0108 22:55:42.905578   16328 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0108 22:55:42.905621   16328 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0108 22:55:42.905665   16328 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0108 22:55:42.905740   16328 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0108 22:55:42.905811   16328 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0108 22:55:42.905877   16328 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0108 22:55:42.905950   16328 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0108 22:55:42.908347   16328 out.go:204]   - Booting up control plane ...
	I0108 22:55:42.908452   16328 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0108 22:55:42.908549   16328 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0108 22:55:42.908646   16328 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0108 22:55:42.908758   16328 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0108 22:55:42.908859   16328 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0108 22:55:42.908903   16328 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0108 22:55:42.909033   16328 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0108 22:55:42.909098   16328 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002443 seconds
	I0108 22:55:42.909183   16328 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0108 22:55:42.909282   16328 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0108 22:55:42.909338   16328 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0108 22:55:42.909494   16328 kubeadm.go:322] [mark-control-plane] Marking the node addons-917645 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0108 22:55:42.909547   16328 kubeadm.go:322] [bootstrap-token] Using token: n1e7ts.sb3q9w94ynrngmse
	I0108 22:55:42.910818   16328 out.go:204]   - Configuring RBAC rules ...
	I0108 22:55:42.910913   16328 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0108 22:55:42.911000   16328 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0108 22:55:42.911113   16328 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0108 22:55:42.911234   16328 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0108 22:55:42.911338   16328 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0108 22:55:42.911422   16328 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0108 22:55:42.911519   16328 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0108 22:55:42.911557   16328 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0108 22:55:42.911595   16328 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0108 22:55:42.911601   16328 kubeadm.go:322] 
	I0108 22:55:42.911662   16328 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0108 22:55:42.911673   16328 kubeadm.go:322] 
	I0108 22:55:42.911733   16328 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0108 22:55:42.911739   16328 kubeadm.go:322] 
	I0108 22:55:42.911778   16328 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0108 22:55:42.911831   16328 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0108 22:55:42.911887   16328 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0108 22:55:42.911894   16328 kubeadm.go:322] 
	I0108 22:55:42.911958   16328 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0108 22:55:42.911969   16328 kubeadm.go:322] 
	I0108 22:55:42.912014   16328 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0108 22:55:42.912019   16328 kubeadm.go:322] 
	I0108 22:55:42.912060   16328 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0108 22:55:42.912120   16328 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0108 22:55:42.912182   16328 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0108 22:55:42.912189   16328 kubeadm.go:322] 
	I0108 22:55:42.912258   16328 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0108 22:55:42.912324   16328 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0108 22:55:42.912330   16328 kubeadm.go:322] 
	I0108 22:55:42.912398   16328 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token n1e7ts.sb3q9w94ynrngmse \
	I0108 22:55:42.912483   16328 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d5a1c3d9be3d121fb55a057eb18127b649e30c3b63ae0beee41daac803e8b8c5 \
	I0108 22:55:42.912508   16328 kubeadm.go:322] 	--control-plane 
	I0108 22:55:42.912514   16328 kubeadm.go:322] 
	I0108 22:55:42.912585   16328 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0108 22:55:42.912592   16328 kubeadm.go:322] 
	I0108 22:55:42.912686   16328 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token n1e7ts.sb3q9w94ynrngmse \
	I0108 22:55:42.912797   16328 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d5a1c3d9be3d121fb55a057eb18127b649e30c3b63ae0beee41daac803e8b8c5 
	I0108 22:55:42.912819   16328 cni.go:84] Creating CNI manager for ""
	I0108 22:55:42.912827   16328 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0108 22:55:42.914232   16328 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0108 22:55:42.915390   16328 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0108 22:55:42.929689   16328 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0108 22:55:42.962612   16328 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0108 22:55:42.962711   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:42.962711   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a minikube.k8s.io/name=addons-917645 minikube.k8s.io/updated_at=2024_01_08T22_55_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:42.993186   16328 ops.go:34] apiserver oom_adj: -16
	I0108 22:55:43.201410   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:43.701707   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:44.202084   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:44.702241   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:45.202383   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:45.701914   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:46.201481   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:46.702255   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:47.201759   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:47.701717   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:48.201528   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:48.701833   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:49.202076   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:49.701612   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:50.201420   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:50.701451   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:51.201539   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:51.702234   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:52.201550   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:52.702340   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:53.201765   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:53.701568   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:54.201676   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:54.701710   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:55.201549   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:55.701417   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:56.202255   16328 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0108 22:55:56.311785   16328 kubeadm.go:1088] duration metric: took 13.349158945s to wait for elevateKubeSystemPrivileges.
	I0108 22:55:56.311819   16328 kubeadm.go:406] StartCluster complete in 24.865491398s
	I0108 22:55:56.311842   16328 settings.go:142] acquiring lock: {Name:mk6934c49fed14252ac333e6d8d3cf7ddd1322e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:56.311959   16328 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17830-8357/kubeconfig
	I0108 22:55:56.312369   16328 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/kubeconfig: {Name:mkd3ac4560bb1fceeae1ff53f57969a695a9f51f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:55:56.312569   16328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0108 22:55:56.312658   16328 addons.go:505] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0108 22:55:56.312758   16328 addons.go:69] Setting ingress-dns=true in profile "addons-917645"
	I0108 22:55:56.312773   16328 addons.go:69] Setting inspektor-gadget=true in profile "addons-917645"
	I0108 22:55:56.312784   16328 addons.go:237] Setting addon ingress-dns=true in "addons-917645"
	I0108 22:55:56.312793   16328 addons.go:237] Setting addon inspektor-gadget=true in "addons-917645"
	I0108 22:55:56.312796   16328 config.go:182] Loaded profile config "addons-917645": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 22:55:56.312813   16328 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-917645"
	I0108 22:55:56.312810   16328 addons.go:69] Setting cloud-spanner=true in profile "addons-917645"
	I0108 22:55:56.312833   16328 addons.go:237] Setting addon cloud-spanner=true in "addons-917645"
	I0108 22:55:56.312843   16328 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-917645"
	I0108 22:55:56.312846   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.312825   16328 addons.go:69] Setting metrics-server=true in profile "addons-917645"
	I0108 22:55:56.312857   16328 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-917645"
	I0108 22:55:56.312759   16328 addons.go:69] Setting yakd=true in profile "addons-917645"
	I0108 22:55:56.312862   16328 addons.go:237] Setting addon csi-hostpath-driver=true in "addons-917645"
	I0108 22:55:56.312868   16328 addons.go:237] Setting addon metrics-server=true in "addons-917645"
	I0108 22:55:56.312874   16328 addons.go:69] Setting gcp-auth=true in profile "addons-917645"
	I0108 22:55:56.312881   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.312890   16328 mustload.go:65] Loading cluster: addons-917645
	I0108 22:55:56.312902   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.312913   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.313060   16328 config.go:182] Loaded profile config "addons-917645": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 22:55:56.312858   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.313289   16328 addons.go:69] Setting helm-tiller=true in profile "addons-917645"
	I0108 22:55:56.313292   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.313300   16328 addons.go:237] Setting addon helm-tiller=true in "addons-917645"
	I0108 22:55:56.313312   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.313313   16328 addons.go:69] Setting ingress=true in profile "addons-917645"
	I0108 22:55:56.313315   16328 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-917645"
	I0108 22:55:56.313341   16328 addons.go:69] Setting storage-provisioner=true in profile "addons-917645"
	I0108 22:55:56.313358   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.312783   16328 addons.go:69] Setting default-storageclass=true in profile "addons-917645"
	I0108 22:55:56.313396   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.313416   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.313281   16328 addons.go:69] Setting volumesnapshots=true in profile "addons-917645"
	I0108 22:55:56.313437   16328 addons.go:237] Setting addon volumesnapshots=true in "addons-917645"
	I0108 22:55:56.313398   16328 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-917645"
	I0108 22:55:56.313362   16328 addons.go:237] Setting addon storage-provisioner=true in "addons-917645"
	I0108 22:55:56.313298   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.313525   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.313345   16328 addons.go:237] Setting addon nvidia-device-plugin=true in "addons-917645"
	I0108 22:55:56.312869   16328 addons.go:237] Setting addon yakd=true in "addons-917645"
	I0108 22:55:56.313385   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.313597   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.313624   16328 addons.go:69] Setting registry=true in profile "addons-917645"
	I0108 22:55:56.313645   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.313659   16328 addons.go:237] Setting addon registry=true in "addons-917645"
	I0108 22:55:56.313635   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.313331   16328 addons.go:237] Setting addon ingress=true in "addons-917645"
	I0108 22:55:56.313791   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.313350   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.313944   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.313974   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.313995   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.313606   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.314035   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.314042   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.314003   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.314088   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.313302   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.314208   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.314042   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.314449   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.314464   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.314466   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.314471   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.314484   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.314494   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.314513   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.314468   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.314600   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.314737   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.314768   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.333540   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0108 22:55:56.333540   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43317
	I0108 22:55:56.334027   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.334157   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I0108 22:55:56.334508   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.334606   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.334629   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.334646   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.335033   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.335116   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.335131   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.335170   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.335187   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.335491   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.335559   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.335781   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.335800   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.336017   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.336044   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.336095   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.336575   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35399
	I0108 22:55:56.340453   16328 addons.go:237] Setting addon default-storageclass=true in "addons-917645"
	I0108 22:55:56.340494   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.340918   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.340946   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.342000   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.342513   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.342531   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.342589   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0108 22:55:56.342776   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.342810   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.343915   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.344565   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.345417   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.348406   16328 addons.go:237] Setting addon storage-provisioner-rancher=true in "addons-917645"
	I0108 22:55:56.348449   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.348873   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.348895   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.353786   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.353809   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.354284   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.354871   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.354894   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.357515   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43053
	I0108 22:55:56.357890   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.359751   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.359784   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.360111   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.361129   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.361161   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.361642   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35691
	I0108 22:55:56.362115   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.362569   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.362588   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.362923   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.363424   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.363469   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.366512   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42761
	I0108 22:55:56.366999   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.367022   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40215
	I0108 22:55:56.367394   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.367468   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.367485   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.368327   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.368393   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.368413   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.368529   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.368783   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.370442   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.372416   16328 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0108 22:55:56.373770   16328 addons.go:429] installing /etc/kubernetes/addons/deployment.yaml
	I0108 22:55:56.373789   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0108 22:55:56.373806   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.372594   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I0108 22:55:56.373240   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.373949   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.374429   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0108 22:55:56.374817   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.375319   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.375339   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.375406   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.375875   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.375892   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.376266   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.376685   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.376700   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.377285   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.377458   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.377515   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.377912   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.377971   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.379075   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.379281   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.379318   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:55:56.379436   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.379636   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.379693   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.379734   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.383113   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40443
	I0108 22:55:56.383434   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.383894   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.383919   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.384351   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.384855   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.384895   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.386541   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0108 22:55:56.386943   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.387435   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.387451   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.387793   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.387957   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.394791   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43977
	I0108 22:55:56.395765   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44851
	I0108 22:55:56.396146   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.396781   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.396807   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.396880   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0108 22:55:56.397338   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.397820   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.397836   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.398209   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.398797   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.398832   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.399043   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36683
	I0108 22:55:56.399256   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.399335   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.399867   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.399905   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.400162   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.400343   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.400355   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.400454   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.402715   16328 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0108 22:55:56.401072   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0108 22:55:56.401205   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.401650   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.403244   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43657
	I0108 22:55:56.404066   16328 addons.go:429] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0108 22:55:56.404088   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0108 22:55:56.403719   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0108 22:55:56.404106   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.404169   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.404733   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.405336   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.405433   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.405495   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.405604   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.405637   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.405960   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.405977   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.406093   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.406103   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.406357   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.406374   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.406432   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.406525   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.406584   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.406747   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.406808   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.406994   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.407572   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.408759   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.409515   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.409669   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.409685   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.411965   16328 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0108 22:55:56.410224   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.410322   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.410866   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.411406   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.413268   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
	I0108 22:55:56.413318   16328 addons.go:429] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0108 22:55:56.413337   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0108 22:55:56.413360   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.412163   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.415330   16328 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0108 22:55:56.414309   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.414862   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.414941   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38921
	I0108 22:55:56.415171   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40513
	I0108 22:55:56.416505   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35853
	I0108 22:55:56.416511   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.417252   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.417279   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.417067   16328 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0108 22:55:56.417095   16328 addons.go:429] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0108 22:55:56.419141   16328 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0108 22:55:56.419155   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0108 22:55:56.419159   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0108 22:55:56.419173   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.419178   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.417104   16328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0108 22:55:56.417544   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.417582   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.417719   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.417824   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.418153   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.420214   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.420904   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.422462   16328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0108 22:55:56.422529   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0108 22:55:56.421472   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.421564   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.422045   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.421353   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.423115   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.423830   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.424503   16328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0108 22:55:56.424590   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.424699   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.424801   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.424913   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.424629   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.425176   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.425186   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.425208   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.425212   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.425490   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.426093   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.426124   16328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0108 22:55:56.426231   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.426287   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.426746   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.426755   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.426776   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.426785   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.426792   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.426832   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.428142   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.428175   16328 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0108 22:55:56.426943   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.428450   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.428511   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.428529   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.428537   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.428943   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.429607   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.433171   16328 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0108 22:55:56.429077   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.434408   16328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0108 22:55:56.429566   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42919
	I0108 22:55:56.435639   16328 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0108 22:55:56.429911   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.429935   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.431365   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.431591   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45903
	I0108 22:55:56.431978   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I0108 22:55:56.432343   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	W0108 22:55:56.429452   16328 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-917645" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0108 22:55:56.433347   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.434771   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.436820   16328 addons.go:429] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0108 22:55:56.436836   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0108 22:55:56.436851   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	E0108 22:55:56.436987   16328 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0108 22:55:56.437009   16328 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.75 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0108 22:55:56.438540   16328 out.go:177] * Verifying Kubernetes components...
	I0108 22:55:56.437334   16328 addons.go:429] installing /etc/kubernetes/addons/storageclass.yaml
	I0108 22:55:56.437392   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.437539   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.438226   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.440043   16328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:55:56.440058   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.440070   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0108 22:55:56.440084   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.440443   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.441403   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.441443   16328 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0108 22:55:56.442640   16328 addons.go:429] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:55:56.442654   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0108 22:55:56.442669   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.441477   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.441227   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.440744   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.442760   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.442783   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.444193   16328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:55:56.442045   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.442099   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.442356   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.442898   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.446868   16328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:55:56.445574   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.445742   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.445744   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.445762   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.446169   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.446379   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.446575   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.446612   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34091
	I0108 22:55:56.447039   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.449447   16328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0108 22:55:56.448208   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.448257   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.448261   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.448278   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.448285   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.448455   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.448530   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.448713   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.449584   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.451434   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.451510   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.453138   16328 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0108 22:55:56.451669   16328 addons.go:429] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:55:56.451699   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.451771   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.452134   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.452800   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35701
	I0108 22:55:56.454888   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0108 22:55:56.454907   16328 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0108 22:55:56.454911   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.454916   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0108 22:55:56.454927   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.454943   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.455019   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.455076   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.455270   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.455443   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.457739   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.459724   16328 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0108 22:55:56.458577   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.458800   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.459132   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.459305   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.462056   16328 out.go:177]   - Using image docker.io/busybox:stable
	I0108 22:55:56.460974   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.461000   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.461053   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.461161   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.463250   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.463272   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.463375   16328 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:55:56.463388   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0108 22:55:56.463400   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.463426   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.463457   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.463543   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.463967   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.467883   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.467941   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.467957   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.467969   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.468140   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.468296   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.468438   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.473048   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:55:56.473072   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:55:56.473491   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.474316   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.474331   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.474646   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.474776   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.476098   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.477884   16328 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0108 22:55:56.479333   16328 out.go:177]   - Using image docker.io/registry:2.8.3
	I0108 22:55:56.480883   16328 addons.go:429] installing /etc/kubernetes/addons/registry-rc.yaml
	I0108 22:55:56.480911   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0108 22:55:56.480943   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.483931   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.484231   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.484254   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.484432   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.484610   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.484777   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.484909   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.490686   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0108 22:55:56.491058   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.491862   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33485
	I0108 22:55:56.492095   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.492120   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.492177   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:55:56.492445   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.492566   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.492595   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:55:56.492617   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:55:56.492885   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:55:56.493048   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:55:56.494151   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.495932   16328 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0108 22:55:56.494774   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:55:56.497266   16328 addons.go:429] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:55:56.497277   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0108 22:55:56.497289   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.498815   16328 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0108 22:55:56.499999   16328 addons.go:429] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:55:56.500013   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0108 22:55:56.500031   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:55:56.499839   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.500192   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.500220   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.500352   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.500552   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.500684   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.500844   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.502468   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.502795   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:55:56.502821   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:55:56.502937   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:55:56.503085   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:55:56.503218   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:55:56.503322   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:55:56.701383   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0108 22:55:56.766346   16328 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0108 22:55:56.767323   16328 node_ready.go:35] waiting up to 6m0s for node "addons-917645" to be "Ready" ...
	I0108 22:55:56.771577   16328 node_ready.go:49] node "addons-917645" has status "Ready":"True"
	I0108 22:55:56.771596   16328 node_ready.go:38] duration metric: took 4.224843ms waiting for node "addons-917645" to be "Ready" ...
	I0108 22:55:56.771603   16328 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:55:56.777816   16328 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9nmdz" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:56.852375   16328 addons.go:429] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0108 22:55:56.852393   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0108 22:55:56.920906   16328 addons.go:429] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0108 22:55:56.920930   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0108 22:55:56.924683   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0108 22:55:57.005365   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0108 22:55:57.025378   16328 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0108 22:55:57.025396   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0108 22:55:57.113194   16328 addons.go:429] installing /etc/kubernetes/addons/registry-svc.yaml
	I0108 22:55:57.113215   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0108 22:55:57.119599   16328 addons.go:429] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0108 22:55:57.119623   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0108 22:55:57.119709   16328 addons.go:429] installing /etc/kubernetes/addons/ig-role.yaml
	I0108 22:55:57.119727   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0108 22:55:57.164329   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0108 22:55:57.189560   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0108 22:55:57.214343   16328 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0108 22:55:57.214361   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0108 22:55:57.249809   16328 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0108 22:55:57.249829   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0108 22:55:57.273613   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0108 22:55:57.310143   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0108 22:55:57.318638   16328 addons.go:429] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0108 22:55:57.318658   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0108 22:55:57.370410   16328 addons.go:429] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:55:57.370436   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0108 22:55:57.467007   16328 addons.go:429] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0108 22:55:57.467034   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0108 22:55:57.519085   16328 addons.go:429] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0108 22:55:57.519107   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0108 22:55:57.542751   16328 addons.go:429] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0108 22:55:57.542768   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0108 22:55:57.823914   16328 addons.go:429] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0108 22:55:57.823933   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0108 22:55:57.847344   16328 addons.go:429] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:55:57.847363   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0108 22:55:57.872259   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0108 22:55:57.877652   16328 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0108 22:55:57.877667   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0108 22:55:57.927922   16328 addons.go:429] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0108 22:55:57.927947   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0108 22:55:58.020650   16328 addons.go:429] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0108 22:55:58.020673   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0108 22:55:58.238590   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0108 22:55:58.265227   16328 addons.go:429] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0108 22:55:58.265249   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0108 22:55:58.290499   16328 addons.go:429] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0108 22:55:58.290520   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0108 22:55:58.385899   16328 addons.go:429] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:55:58.385919   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0108 22:55:58.489984   16328 addons.go:429] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0108 22:55:58.490004   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0108 22:55:58.696193   16328 addons.go:429] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0108 22:55:58.696216   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0108 22:55:58.767011   16328 addons.go:429] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0108 22:55:58.767038   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0108 22:55:58.787495   16328 pod_ready.go:102] pod "coredns-5dd5756b68-9nmdz" in "kube-system" namespace has status "Ready":"False"
	I0108 22:55:58.836728   16328 addons.go:429] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:55:58.836753   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0108 22:55:58.872154   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0108 22:55:58.884626   16328 addons.go:429] installing /etc/kubernetes/addons/ig-crd.yaml
	I0108 22:55:58.884662   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0108 22:55:58.900787   16328 addons.go:429] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:55:58.900806   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0108 22:55:59.122312   16328 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0108 22:55:59.122332   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0108 22:55:59.208158   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0108 22:55:59.295420   16328 pod_ready.go:92] pod "coredns-5dd5756b68-9nmdz" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:59.295448   16328 pod_ready.go:81] duration metric: took 2.517609503s waiting for pod "coredns-5dd5756b68-9nmdz" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.295463   16328 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tf28p" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.320386   16328 pod_ready.go:92] pod "coredns-5dd5756b68-tf28p" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:59.320414   16328 pod_ready.go:81] duration metric: took 24.943514ms waiting for pod "coredns-5dd5756b68-tf28p" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.320427   16328 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-917645" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.331852   16328 pod_ready.go:92] pod "etcd-addons-917645" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:59.331876   16328 pod_ready.go:81] duration metric: took 11.440629ms waiting for pod "etcd-addons-917645" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.331888   16328 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-917645" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.332119   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:55:59.340397   16328 pod_ready.go:92] pod "kube-apiserver-addons-917645" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:59.340414   16328 pod_ready.go:81] duration metric: took 8.519133ms waiting for pod "kube-apiserver-addons-917645" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.340421   16328 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-917645" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.348384   16328 pod_ready.go:92] pod "kube-controller-manager-addons-917645" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:59.348405   16328 pod_ready.go:81] duration metric: took 7.977906ms waiting for pod "kube-controller-manager-addons-917645" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.348415   16328 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9hdbm" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.381686   16328 addons.go:429] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:55:59.381705   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0108 22:55:59.685698   16328 pod_ready.go:92] pod "kube-proxy-9hdbm" in "kube-system" namespace has status "Ready":"True"
	I0108 22:55:59.685727   16328 pod_ready.go:81] duration metric: took 337.303882ms waiting for pod "kube-proxy-9hdbm" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.685741   16328 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-917645" in "kube-system" namespace to be "Ready" ...
	I0108 22:55:59.713692   16328 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0108 22:55:59.713718   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0108 22:55:59.757521   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0108 22:55:59.956457   16328 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0108 22:55:59.956481   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0108 22:56:00.082853   16328 pod_ready.go:92] pod "kube-scheduler-addons-917645" in "kube-system" namespace has status "Ready":"True"
	I0108 22:56:00.082878   16328 pod_ready.go:81] duration metric: took 397.129611ms waiting for pod "kube-scheduler-addons-917645" in "kube-system" namespace to be "Ready" ...
	I0108 22:56:00.082889   16328 pod_ready.go:38] duration metric: took 3.311274214s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0108 22:56:00.082907   16328 api_server.go:52] waiting for apiserver process to appear ...
	I0108 22:56:00.082960   16328 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 22:56:00.280528   16328 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0108 22:56:00.280547   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0108 22:56:00.570597   16328 addons.go:429] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:56:00.570629   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0108 22:56:00.793365   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0108 22:56:02.628520   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.927100769s)
	I0108 22:56:02.628562   16328 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.862185301s)
	I0108 22:56:02.628587   16328 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0108 22:56:02.628567   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:02.628616   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:02.628881   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:02.628921   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:02.628939   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:02.628958   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:02.628971   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:02.629209   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:02.629227   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:02.629242   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:02.960010   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.954616946s)
	I0108 22:56:02.960057   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:02.960065   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:02.960257   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.035547186s)
	I0108 22:56:02.960286   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:02.960294   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:02.960352   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:02.960406   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:02.960425   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:02.960440   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:02.960453   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:02.960664   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:02.960741   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:02.960756   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:02.960768   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:02.960780   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:02.961975   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:02.961981   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:02.961991   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:02.962000   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:02.962011   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:02.962014   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:02.972044   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:02.972062   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:02.972258   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:02.972319   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:02.972282   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:03.043172   16328 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0108 22:56:03.043213   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:56:03.046166   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:56:03.046566   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:56:03.046595   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:56:03.046713   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:56:03.046916   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:56:03.047076   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:56:03.047217   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:56:03.809506   16328 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0108 22:56:04.113170   16328 addons.go:237] Setting addon gcp-auth=true in "addons-917645"
	I0108 22:56:04.113232   16328 host.go:66] Checking if "addons-917645" exists ...
	I0108 22:56:04.113670   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:56:04.113711   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:56:04.128001   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33047
	I0108 22:56:04.128365   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:56:04.128849   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:56:04.128871   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:56:04.129186   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:56:04.129618   16328 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 22:56:04.129663   16328 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 22:56:04.143400   16328 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0108 22:56:04.143770   16328 main.go:141] libmachine: () Calling .GetVersion
	I0108 22:56:04.144523   16328 main.go:141] libmachine: Using API Version  1
	I0108 22:56:04.144548   16328 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 22:56:04.144846   16328 main.go:141] libmachine: () Calling .GetMachineName
	I0108 22:56:04.144993   16328 main.go:141] libmachine: (addons-917645) Calling .GetState
	I0108 22:56:04.146597   16328 main.go:141] libmachine: (addons-917645) Calling .DriverName
	I0108 22:56:04.146829   16328 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0108 22:56:04.146855   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHHostname
	I0108 22:56:04.149397   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:56:04.149739   16328 main.go:141] libmachine: (addons-917645) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:f2:5b", ip: ""} in network mk-addons-917645: {Iface:virbr1 ExpiryTime:2024-01-08 23:55:13 +0000 UTC Type:0 Mac:52:54:00:8d:f2:5b Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:addons-917645 Clientid:01:52:54:00:8d:f2:5b}
	I0108 22:56:04.149773   16328 main.go:141] libmachine: (addons-917645) DBG | domain addons-917645 has defined IP address 192.168.39.75 and MAC address 52:54:00:8d:f2:5b in network mk-addons-917645
	I0108 22:56:04.149915   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHPort
	I0108 22:56:04.150071   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHKeyPath
	I0108 22:56:04.150209   16328 main.go:141] libmachine: (addons-917645) Calling .GetSSHUsername
	I0108 22:56:04.150331   16328 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/addons-917645/id_rsa Username:docker}
	I0108 22:56:05.927937   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.763563326s)
	I0108 22:56:05.927996   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:05.928010   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:05.928283   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:05.928317   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:05.928326   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:05.928342   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:05.928364   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:05.928698   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:05.928716   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:06.003053   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:06.003073   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:06.003350   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:06.003374   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:06.003375   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.141410   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.951813445s)
	I0108 22:56:08.141457   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.867807549s)
	I0108 22:56:08.141461   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.141523   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.141541   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.83137287s)
	I0108 22:56:08.141565   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.141580   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.141494   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.141608   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.141625   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.269340837s)
	I0108 22:56:08.141651   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.141667   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.141671   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.903056453s)
	I0108 22:56:08.141689   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.141700   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.141746   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.269556917s)
	I0108 22:56:08.141780   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.141793   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.141882   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.933696327s)
	I0108 22:56:08.141906   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.141917   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.141967   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.141987   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.142018   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.142030   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.142040   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.142043   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.809900282s)
	I0108 22:56:08.142048   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	W0108 22:56:08.142065   16328 addons.go:455] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 22:56:08.142077   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.142089   16328 retry.go:31] will retry after 198.577757ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0108 22:56:08.142100   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.142111   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.142120   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.142155   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.384604341s)
	I0108 22:56:08.142174   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.142183   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.142235   16328 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (8.059258708s)
	I0108 22:56:08.142251   16328 api_server.go:72] duration metric: took 11.705224366s to wait for apiserver process to appear ...
	I0108 22:56:08.142258   16328 api_server.go:88] waiting for apiserver healthz status ...
	I0108 22:56:08.142272   16328 api_server.go:253] Checking apiserver healthz at https://192.168.39.75:8443/healthz ...
	I0108 22:56:08.142356   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.142412   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.142441   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.142458   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.142474   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.142538   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.142557   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.142591   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.142612   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.142633   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.142639   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.142674   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.142695   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.142710   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.142719   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.142732   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.142676   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.143953   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.143976   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.143986   16328 addons.go:473] Verifying addon registry=true in "addons-917645"
	I0108 22:56:08.142660   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.142615   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.144295   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.142595   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.144355   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.144383   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.144402   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.144409   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.144430   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.144576   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.145745   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.145747   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.146280   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.146294   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.146299   16328 addons.go:473] Verifying addon ingress=true in "addons-917645"
	I0108 22:56:08.146285   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.146303   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.146376   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.146178   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.146433   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.146453   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.146169   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.146545   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.146562   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.146193   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.146155   16328 out.go:177] * Verifying registry addon...
	I0108 22:56:08.146584   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:08.146596   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:08.146650   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.146187   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.149487   16328 out.go:177] * Verifying ingress addon...
	I0108 22:56:08.147747   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.148177   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:08.148207   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:08.148819   16328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0108 22:56:08.151475   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.152097   16328 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0108 22:56:08.153310   16328 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-917645 service yakd-dashboard -n yakd-dashboard
	
	
	I0108 22:56:08.153349   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:08.153383   16328 addons.go:473] Verifying addon metrics-server=true in "addons-917645"
	I0108 22:56:08.163129   16328 api_server.go:279] https://192.168.39.75:8443/healthz returned 200:
	ok
	I0108 22:56:08.170534   16328 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0108 22:56:08.170552   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:08.171084   16328 api_server.go:141] control plane version: v1.28.4
	I0108 22:56:08.171101   16328 api_server.go:131] duration metric: took 28.838841ms to wait for apiserver health ...
	I0108 22:56:08.171109   16328 system_pods.go:43] waiting for kube-system pods to appear ...
	I0108 22:56:08.179722   16328 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0108 22:56:08.179738   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:08.196615   16328 system_pods.go:59] 16 kube-system pods found
	I0108 22:56:08.196659   16328 system_pods.go:61] "coredns-5dd5756b68-9nmdz" [f70c431b-69b8-4139-a855-c3863a2c98b3] Running
	I0108 22:56:08.196666   16328 system_pods.go:61] "coredns-5dd5756b68-tf28p" [dfee0af7-ed20-4093-8ea7-99cf6224a775] Running
	I0108 22:56:08.196672   16328 system_pods.go:61] "etcd-addons-917645" [299a20ed-999f-45d7-b8d9-4dab63c1baa3] Running
	I0108 22:56:08.196679   16328 system_pods.go:61] "kube-apiserver-addons-917645" [9faeb733-f6b0-48a6-be3e-4a9dd1ad3003] Running
	I0108 22:56:08.196685   16328 system_pods.go:61] "kube-controller-manager-addons-917645" [d5d62cb0-5b1c-49a2-8c65-c39b4c18ab82] Running
	I0108 22:56:08.196696   16328 system_pods.go:61] "kube-ingress-dns-minikube" [0270c103-2342-4e40-bb9c-52115430161a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 22:56:08.196705   16328 system_pods.go:61] "kube-proxy-9hdbm" [7ec9eab1-5228-46b6-ad55-13c22f4c3ed7] Running
	I0108 22:56:08.196714   16328 system_pods.go:61] "kube-scheduler-addons-917645" [4be9d202-26c7-48aa-8ca2-fb2a339f1869] Running
	I0108 22:56:08.196724   16328 system_pods.go:61] "metrics-server-7c66d45ddc-ltcxd" [571077f8-dd80-4589-b5ee-6d050e13715d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:56:08.196742   16328 system_pods.go:61] "nvidia-device-plugin-daemonset-9rk2b" [98a3c63e-8291-437f-a7d0-e630bf91f6de] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0108 22:56:08.196753   16328 system_pods.go:61] "registry-cghz4" [5cab67d2-562a-4226-8ed1-603f82df4ccd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0108 22:56:08.196768   16328 system_pods.go:61] "registry-proxy-vnwfs" [f26fea03-edb3-469e-ac2b-dde7893070a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0108 22:56:08.196787   16328 system_pods.go:61] "snapshot-controller-58dbcc7b99-q5vxq" [b1b017f5-9fd5-4c55-9edd-bb19ec1ec35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 22:56:08.196802   16328 system_pods.go:61] "snapshot-controller-58dbcc7b99-vj5rp" [416374f3-1e13-4d42-88d2-b8d220eeb9e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 22:56:08.196814   16328 system_pods.go:61] "storage-provisioner" [6af44ee2-0533-4f28-a6a9-55d523c1f797] Running
	I0108 22:56:08.196827   16328 system_pods.go:61] "tiller-deploy-7b677967b9-czsq5" [11e488b3-2ed7-40a9-9ca3-595f8137fc7e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0108 22:56:08.196838   16328 system_pods.go:74] duration metric: took 25.721201ms to wait for pod list to return data ...
	I0108 22:56:08.196858   16328 default_sa.go:34] waiting for default service account to be created ...
	I0108 22:56:08.199944   16328 default_sa.go:45] found service account: "default"
	I0108 22:56:08.199959   16328 default_sa.go:55] duration metric: took 3.091511ms for default service account to be created ...
	I0108 22:56:08.199966   16328 system_pods.go:116] waiting for k8s-apps to be running ...
	I0108 22:56:08.210182   16328 system_pods.go:86] 16 kube-system pods found
	I0108 22:56:08.210204   16328 system_pods.go:89] "coredns-5dd5756b68-9nmdz" [f70c431b-69b8-4139-a855-c3863a2c98b3] Running
	I0108 22:56:08.210217   16328 system_pods.go:89] "coredns-5dd5756b68-tf28p" [dfee0af7-ed20-4093-8ea7-99cf6224a775] Running
	I0108 22:56:08.210223   16328 system_pods.go:89] "etcd-addons-917645" [299a20ed-999f-45d7-b8d9-4dab63c1baa3] Running
	I0108 22:56:08.210229   16328 system_pods.go:89] "kube-apiserver-addons-917645" [9faeb733-f6b0-48a6-be3e-4a9dd1ad3003] Running
	I0108 22:56:08.210239   16328 system_pods.go:89] "kube-controller-manager-addons-917645" [d5d62cb0-5b1c-49a2-8c65-c39b4c18ab82] Running
	I0108 22:56:08.210250   16328 system_pods.go:89] "kube-ingress-dns-minikube" [0270c103-2342-4e40-bb9c-52115430161a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0108 22:56:08.210261   16328 system_pods.go:89] "kube-proxy-9hdbm" [7ec9eab1-5228-46b6-ad55-13c22f4c3ed7] Running
	I0108 22:56:08.210276   16328 system_pods.go:89] "kube-scheduler-addons-917645" [4be9d202-26c7-48aa-8ca2-fb2a339f1869] Running
	I0108 22:56:08.210291   16328 system_pods.go:89] "metrics-server-7c66d45ddc-ltcxd" [571077f8-dd80-4589-b5ee-6d050e13715d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0108 22:56:08.210317   16328 system_pods.go:89] "nvidia-device-plugin-daemonset-9rk2b" [98a3c63e-8291-437f-a7d0-e630bf91f6de] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0108 22:56:08.210331   16328 system_pods.go:89] "registry-cghz4" [5cab67d2-562a-4226-8ed1-603f82df4ccd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0108 22:56:08.210346   16328 system_pods.go:89] "registry-proxy-vnwfs" [f26fea03-edb3-469e-ac2b-dde7893070a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0108 22:56:08.210358   16328 system_pods.go:89] "snapshot-controller-58dbcc7b99-q5vxq" [b1b017f5-9fd5-4c55-9edd-bb19ec1ec35d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 22:56:08.210372   16328 system_pods.go:89] "snapshot-controller-58dbcc7b99-vj5rp" [416374f3-1e13-4d42-88d2-b8d220eeb9e0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0108 22:56:08.210385   16328 system_pods.go:89] "storage-provisioner" [6af44ee2-0533-4f28-a6a9-55d523c1f797] Running
	I0108 22:56:08.210399   16328 system_pods.go:89] "tiller-deploy-7b677967b9-czsq5" [11e488b3-2ed7-40a9-9ca3-595f8137fc7e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0108 22:56:08.210411   16328 system_pods.go:126] duration metric: took 10.43864ms to wait for k8s-apps to be running ...
	I0108 22:56:08.210425   16328 system_svc.go:44] waiting for kubelet service to be running ....
	I0108 22:56:08.210473   16328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 22:56:08.341286   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0108 22:56:08.661596   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:08.661800   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:09.162205   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:09.162314   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:09.682235   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:09.682303   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:10.170780   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:10.171562   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:10.668241   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:10.672342   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:10.809461   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.016038793s)
	I0108 22:56:10.809496   16328 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.66264236s)
	I0108 22:56:10.809511   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:10.809530   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:10.809537   16328 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.599041657s)
	I0108 22:56:10.809563   16328 system_svc.go:56] duration metric: took 2.599136424s WaitForService to wait for kubelet.
	I0108 22:56:10.809613   16328 kubeadm.go:581] duration metric: took 14.372586231s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0108 22:56:10.809639   16328 node_conditions.go:102] verifying NodePressure condition ...
	I0108 22:56:10.809827   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:10.809855   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:10.824468   16328 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0108 22:56:10.824486   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:10.816292   16328 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0108 22:56:10.826067   16328 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0108 22:56:10.827868   16328 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0108 22:56:10.827884   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0108 22:56:10.826076   16328 node_conditions.go:123] node cpu capacity is 2
	I0108 22:56:10.827948   16328 node_conditions.go:105] duration metric: took 18.295331ms to run NodePressure ...
	I0108 22:56:10.827968   16328 start.go:228] waiting for startup goroutines ...
	I0108 22:56:10.826103   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:10.828000   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:10.828267   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:10.828281   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:10.828291   16328 addons.go:473] Verifying addon csi-hostpath-driver=true in "addons-917645"
	I0108 22:56:10.829817   16328 out.go:177] * Verifying csi-hostpath-driver addon...
	I0108 22:56:10.831706   16328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0108 22:56:10.847157   16328 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0108 22:56:10.847173   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:11.009819   16328 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0108 22:56:11.009837   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0108 22:56:11.131574   16328 addons.go:429] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:56:11.131597   16328 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0108 22:56:11.161528   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:11.163173   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:11.201597   16328 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0108 22:56:11.341407   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:11.663560   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:11.663609   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:11.843737   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:12.159553   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:12.165552   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:12.283543   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.942201716s)
	I0108 22:56:12.283603   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:12.283620   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:12.283922   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:12.283943   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:12.283956   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:12.283967   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:12.284255   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:12.284274   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:12.284282   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:12.339143   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:12.659413   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:12.660671   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:12.838134   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:13.175976   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:13.176042   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:13.311982   16328 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.110345068s)
	I0108 22:56:13.312039   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:13.312059   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:13.312352   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:13.312372   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:13.312382   16328 main.go:141] libmachine: Making call to close driver server
	I0108 22:56:13.312357   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:13.312392   16328 main.go:141] libmachine: (addons-917645) Calling .Close
	I0108 22:56:13.312691   16328 main.go:141] libmachine: (addons-917645) DBG | Closing plugin on server side
	I0108 22:56:13.312725   16328 main.go:141] libmachine: Successfully made call to close driver server
	I0108 22:56:13.312742   16328 main.go:141] libmachine: Making call to close connection to plugin binary
	I0108 22:56:13.314642   16328 addons.go:473] Verifying addon gcp-auth=true in "addons-917645"
	I0108 22:56:13.316309   16328 out.go:177] * Verifying gcp-auth addon...
	I0108 22:56:13.318871   16328 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0108 22:56:13.327997   16328 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0108 22:56:13.328020   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:13.337398   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:13.660566   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:13.660758   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:13.823501   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:13.838318   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:14.157876   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:14.158355   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:14.322418   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:14.336924   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:14.659483   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:14.659751   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:14.823027   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:14.837464   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:15.160592   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:15.160735   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:15.323444   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:15.336930   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:15.660145   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:15.660366   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:15.822485   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:15.843435   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:16.163478   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:16.165050   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:16.322410   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:16.341813   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:16.658587   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:16.658872   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:16.823378   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:16.837853   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:17.359417   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:17.361340   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:17.364892   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:17.365174   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:17.660465   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:17.660770   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:17.822877   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:17.840144   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:18.157571   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:18.158251   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:18.323516   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:18.338363   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:18.658387   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:18.661451   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:18.822919   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:18.839063   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:19.158073   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:19.158171   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:19.322594   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:19.337240   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:19.658099   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:19.658749   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:19.825449   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:19.840349   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:20.159918   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:20.160039   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:20.324296   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:20.339983   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:20.659009   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:20.659558   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:20.822719   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:20.843074   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:21.159762   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:21.160684   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:21.322700   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:21.337268   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:21.659134   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:21.661168   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:21.825690   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:21.839281   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:22.159322   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:22.159520   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:22.322869   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:22.342100   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:22.660743   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:22.660849   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:22.823153   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:22.837950   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:23.160326   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:23.160575   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:23.322466   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:23.336826   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:23.659911   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:23.660086   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:23.823435   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:23.837221   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:24.167673   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:24.171932   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:24.324840   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:24.339081   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:24.658684   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:24.659630   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:24.822783   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:24.837177   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:25.163269   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:25.163389   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:25.326711   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:25.338715   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:25.675629   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:25.675991   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:25.823151   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:25.837563   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:26.159419   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:26.163294   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:26.322694   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:26.337300   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:26.658537   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:26.659079   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:26.823566   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:26.837658   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:27.159839   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:27.160851   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:27.323000   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:27.336921   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:56:27.659514   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:56:27.659757   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:56:27.824336   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:56:27.838541   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... identical kapi.go:96 polling lines omitted: the same four pod selectors ("kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=gcp-auth", "kubernetes.io/minikube-addons=csi-hostpath-driver") were polled every ~500ms from 22:56:28 through 22:57:00, each remaining in state Pending: [<nil>] ...]
	I0108 22:57:00.659331   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:00.659466   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:00.822257   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:00.836859   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:01.158452   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:01.159142   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:01.322694   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:01.337488   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:01.658822   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:01.659511   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:01.822779   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:01.837294   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:02.158804   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:02.159554   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:02.323092   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:02.337126   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:02.659533   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:02.661393   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:02.822882   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:02.838003   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:03.159445   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:03.159843   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:03.323046   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:03.337604   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:03.659642   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:03.659708   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:03.835995   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:03.841887   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:04.158424   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:04.159238   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:04.322593   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:04.337626   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:04.659211   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:04.660595   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:04.822714   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:04.837604   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:05.159692   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:05.161386   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:05.322456   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:05.340855   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:05.982408   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:05.982541   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:05.982716   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:05.986662   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:06.160002   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:06.160607   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:06.323479   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:06.338095   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:06.658308   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:06.659193   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:06.823960   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:06.847144   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:07.160384   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:07.160995   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:07.323782   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:07.338429   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:07.661816   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:07.662164   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:07.825779   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:07.841275   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:08.158227   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:08.159122   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:08.325303   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:08.336680   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:08.658955   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:08.659913   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:08.823731   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:08.838106   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:09.161487   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:09.162074   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:09.323058   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:09.340317   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:09.662565   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:09.662692   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:09.823983   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:09.837810   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:10.158162   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:10.158333   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:10.322861   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:10.339007   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:10.661178   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:10.661177   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:10.823242   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:10.837344   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:11.159937   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:11.160117   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:11.327019   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:11.336342   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:11.660141   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:11.660410   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:11.822257   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:11.837323   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:12.161086   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:12.161810   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:12.324909   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:12.341133   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:12.661245   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:12.661345   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0108 22:57:12.825195   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:12.836680   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:13.401705   16328 kapi.go:107] duration metric: took 1m5.252884098s to wait for kubernetes.io/minikube-addons=registry ...
	I0108 22:57:13.401986   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:13.402969   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:13.405952   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:13.658264   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:13.823683   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:13.837777   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:14.157856   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:14.322289   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:14.337045   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:14.658666   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:14.823453   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:14.837915   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:15.167406   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:15.337676   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:15.338993   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:15.658647   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:15.823876   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:15.839833   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:16.164386   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:16.322638   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:16.337508   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:16.658512   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:16.823098   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:16.837582   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:17.158671   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:17.322236   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:17.337463   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:17.664283   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:17.953451   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:17.957693   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:18.163672   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:18.322795   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:18.341787   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:18.658650   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:18.824341   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:18.838565   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:19.158453   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:19.342460   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:19.343091   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:19.663646   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:19.824651   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:19.841232   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:20.160623   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:20.324169   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:20.337301   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:20.658194   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:20.822694   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:20.841479   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:21.157562   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:21.322474   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:21.337743   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:21.658310   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:21.823782   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:21.837074   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:22.157825   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:22.324432   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:22.338217   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:22.658702   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:22.822645   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:22.837166   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:23.158123   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:23.322925   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:23.338196   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:23.658533   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:23.822815   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:23.837420   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:24.158295   16328 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0108 22:57:24.323104   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:24.338184   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:24.657375   16328 kapi.go:107] duration metric: took 1m16.505274346s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0108 22:57:24.823632   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:24.839059   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:25.323208   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:25.338176   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:25.822930   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:25.838037   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:26.323325   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:26.338547   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:26.825675   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:26.841480   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:27.325009   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:27.338582   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:27.824697   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:27.838642   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:28.340848   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:28.347017   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:28.823023   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:28.837929   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:29.322986   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:29.337895   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:29.823062   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:29.837313   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:30.332456   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:30.360424   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:30.901230   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:30.901642   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:31.323620   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:31.337360   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0108 22:57:31.823523   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:31.838057   16328 kapi.go:107] duration metric: took 1m21.006346452s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0108 22:57:32.323124   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:32.823127   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:33.322680   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:33.822471   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:34.323191   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:34.822768   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:35.322562   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:35.823288   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:36.324009   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:36.823216   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:37.323381   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:37.827451   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:38.322961   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:38.823961   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:39.322476   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:39.823175   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:40.323615   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:40.824177   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:41.323596   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:41.828181   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:42.323317   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:42.824045   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:43.323019   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:43.822815   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:44.324481   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:44.824347   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:45.322924   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:45.823176   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:46.323277   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:46.822790   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:47.323483   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:47.823256   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:48.323262   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:48.823184   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:49.322976   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:49.823356   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:50.323034   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:50.823149   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:51.323164   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:51.822763   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:52.323304   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:52.824287   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:53.322436   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:53.823306   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:54.323239   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:54.822771   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:55.322509   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:55.823400   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:56.323407   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:56.823624   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:57.323321   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:57.823120   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:58.322531   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:58.823760   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:59.323553   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:57:59.823677   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:00.322497   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:00.823562   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:01.323807   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:01.823606   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:02.323408   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:02.823234   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:03.322717   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:03.822717   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:04.323810   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:04.822745   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:05.322960   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:05.823199   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:06.324016   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:06.822548   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:07.323246   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:07.823710   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:08.323780   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:08.823103   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:09.322759   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:09.822961   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:10.322911   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:10.822866   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:11.322736   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:11.823299   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:12.322754   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:12.823529   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:13.323593   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:13.823780   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:14.322203   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:14.822963   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:15.322933   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:15.823147   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:16.323082   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:16.822642   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:17.323228   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:17.822634   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:18.322236   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:18.823165   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:19.323432   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:19.823523   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:20.323353   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:20.823794   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:21.322737   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:21.823500   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:22.323015   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:22.822952   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:23.324361   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:23.823661   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:24.323740   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:24.823061   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:25.322894   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:25.822560   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:26.323420   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:26.823684   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:27.323832   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:27.823282   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:28.323387   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:28.824414   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:29.323257   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:29.823545   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:30.323463   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:30.823398   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:31.323480   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:31.823149   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:32.324156   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:32.822900   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:33.322982   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:33.823258   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:34.323129   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:34.823708   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:35.323449   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:35.824173   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:36.325088   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:36.822611   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:37.323121   16328 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0108 22:58:37.825349   16328 kapi.go:107] duration metric: took 2m24.506476917s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0108 22:58:37.826872   16328 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-917645 cluster.
	I0108 22:58:37.828225   16328 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0108 22:58:37.829440   16328 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0108 22:58:37.830660   16328 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, default-storageclass, storage-provisioner-rancher, nvidia-device-plugin, storage-provisioner, helm-tiller, yakd, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0108 22:58:37.831821   16328 addons.go:508] enable addons completed in 2m41.519169221s: enabled=[cloud-spanner ingress-dns default-storageclass storage-provisioner-rancher nvidia-device-plugin storage-provisioner helm-tiller yakd inspektor-gadget metrics-server volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0108 22:58:37.831851   16328 start.go:233] waiting for cluster config update ...
	I0108 22:58:37.831868   16328 start.go:242] writing updated cluster config ...
	I0108 22:58:37.832097   16328 ssh_runner.go:195] Run: rm -f paused
	I0108 22:58:37.882860   16328 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0108 22:58:37.884672   16328 out.go:177] * Done! kubectl is now configured to use "addons-917645" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	a0a6194accc6d       9211bbaa0dbd6       1 second ago         Exited              busybox                                  0                   d12e66512f285       test-local-path
	0a72939323f71       a416a98b71e22       8 seconds ago        Exited              helper-pod                               0                   6f5f9b63aaa42       helper-pod-create-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8
	91f7b12963438       6d2a98b274382       13 seconds ago       Running             gcp-auth                                 0                   a8beb505c92e2       gcp-auth-d4c87556c-5d79l
	443a60aeb2340       d378d53ef198d       44 seconds ago       Exited              gadget                                   3                   25165580f7032       gadget-fb4xp
	6a98d97cfc573       738351fd438f0       About a minute ago   Running             csi-snapshotter                          0                   746a3bc312a76       csi-hostpathplugin-mcf7x
	db5dd3448d30e       931dbfd16f87c       About a minute ago   Running             csi-provisioner                          0                   746a3bc312a76       csi-hostpathplugin-mcf7x
	33be952c60c7b       e899260153aed       About a minute ago   Running             liveness-probe                           0                   746a3bc312a76       csi-hostpathplugin-mcf7x
	f261947513005       e255e073c508c       About a minute ago   Running             hostpath                                 0                   746a3bc312a76       csi-hostpathplugin-mcf7x
	b7417425495a1       88ef14a257f42       About a minute ago   Running             node-driver-registrar                    0                   746a3bc312a76       csi-hostpathplugin-mcf7x
	503fbbafa3a04       311f90a3747fd       About a minute ago   Running             controller                               0                   dc2c708d70d2b       ingress-nginx-controller-69cff4fd79-fvswh
	3828ce4688590       19a639eda60f0       About a minute ago   Running             csi-resizer                              0                   c037baecf2366       csi-hostpath-resizer-0
	90eb62674386c       a1ed5895ba635       About a minute ago   Running             csi-external-health-monitor-controller   0                   746a3bc312a76       csi-hostpathplugin-mcf7x
	981babd85df95       59cbb42146a37       About a minute ago   Running             csi-attacher                             0                   9a32e2b11506f       csi-hostpath-attacher-0
	81eab28e6a393       1ebff0f9671bc       About a minute ago   Exited              patch                                    0                   b07340b85695d       ingress-nginx-admission-patch-nrmj7
	5c0445cb30813       d2fd211e7dcaa       About a minute ago   Running             registry-proxy                           0                   f5813b76dc382       registry-proxy-vnwfs
	d53d923c3af1d       1ebff0f9671bc       About a minute ago   Exited              create                                   0                   4987098198ce9       ingress-nginx-admission-create-tbkdj
	401811b95671e       e16d1e3a10667       About a minute ago   Running             local-path-provisioner                   0                   58f4af1a81189       local-path-provisioner-78b46b4d5c-zwbhf
	234633449e7ed       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   a467c35b0886b       snapshot-controller-58dbcc7b99-q5vxq
	360b0b7ad9738       aa61ee9c70bc4       About a minute ago   Running             volume-snapshot-controller               0                   577e63e4ac5f8       snapshot-controller-58dbcc7b99-vj5rp
	6c2b51c962f7b       909c3ff012b7f       About a minute ago   Running             registry                                 0                   4855df258dbda       registry-cghz4
	61987d591f1d2       31de47c733c91       About a minute ago   Running             yakd                                     0                   eaa946d43046a       yakd-dashboard-9947fc6bf-w2t4g
	49eab5917617b       a608c686bac93       2 minutes ago        Running             metrics-server                           0                   0dd869183362d       metrics-server-7c66d45ddc-ltcxd
	ff64780568b7c       754854eab8c1c       2 minutes ago        Running             cloud-spanner-emulator                   0                   eb7269eacb372       cloud-spanner-emulator-64c8c85f65-msd9f
	1de91e933cc96       3f39089e90831       2 minutes ago        Running             tiller                                   0                   0f04992b8da04       tiller-deploy-7b677967b9-czsq5
	77ecfac6207b1       1499ed4fbd0aa       2 minutes ago        Running             minikube-ingress-dns                     0                   886e3da8e3f3d       kube-ingress-dns-minikube
	9bc7964468ef4       8cfc3f994a82b       2 minutes ago        Running             nvidia-device-plugin-ctr                 0                   6bff10c785e45       nvidia-device-plugin-daemonset-9rk2b
	cedddb8915c0d       6e38f40d628db       2 minutes ago        Running             storage-provisioner                      0                   0c5863ca1ff44       storage-provisioner
	4a31cc7af934f       ead0a4a53df89       2 minutes ago        Running             coredns                                  0                   d3c31f50ecb33       coredns-5dd5756b68-9nmdz
	9fab34023d467       ead0a4a53df89       2 minutes ago        Running             coredns                                  0                   1e799a54f28ad       coredns-5dd5756b68-tf28p
	4fa091f9e7312       83f6cc407eed8       2 minutes ago        Running             kube-proxy                               0                   7b539642ead9e       kube-proxy-9hdbm
	c475687559087       73deb9a3f7025       3 minutes ago        Running             etcd                                     0                   f8871ab0892b3       etcd-addons-917645
	6fa43fa4ec8f0       e3db313c6dbc0       3 minutes ago        Running             kube-scheduler                           0                   a1cbc2de759c0       kube-scheduler-addons-917645
	ccf9e142e94a1       d058aa5ab969c       3 minutes ago        Running             kube-controller-manager                  0                   ee9e02dca26a0       kube-controller-manager-addons-917645
	49c68ec901e31       7fe0e6f37db33       3 minutes ago        Running             kube-apiserver                           0                   1ce360ff3fee8       kube-apiserver-addons-917645
	
	
	==> containerd <==
	-- Journal begins at Mon 2024-01-08 22:55:09 UTC, ends at Mon 2024-01-08 22:58:50 UTC. --
	Jan 08 22:58:45 addons-917645 containerd[683]: time="2024-01-08T22:58:45.718347553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 08 22:58:45 addons-917645 containerd[683]: time="2024-01-08T22:58:45.718361843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 08 22:58:46 addons-917645 containerd[683]: time="2024-01-08T22:58:46.092937843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-local-path,Uid:b1516ec2-f75f-47f6-a439-a6ea9374c130,Namespace:default,Attempt:0,} returns sandbox id \"d12e66512f285c7cdbc0e4475958171ea9fd25962c4a555908665c4a26f64344\""
	Jan 08 22:58:46 addons-917645 containerd[683]: time="2024-01-08T22:58:46.097241025Z" level=info msg="PullImage \"busybox:stable\""
	Jan 08 22:58:46 addons-917645 containerd[683]: time="2024-01-08T22:58:46.101654932Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 08 22:58:46 addons-917645 containerd[683]: time="2024-01-08T22:58:46.232107722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:task-pv-pod,Uid:001b248f-45e1-4206-90a3-a06d06fc2c74,Namespace:default,Attempt:0,} returns sandbox id \"b08c36a8948578af4c583a39bbb5c8dbe29f8802ac50d3d1cae23814fbbb9111\""
	Jan 08 22:58:47 addons-917645 containerd[683]: time="2024-01-08T22:58:47.026859594Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 08 22:58:48 addons-917645 containerd[683]: time="2024-01-08T22:58:48.995544734Z" level=info msg="ImageCreate event name:\"docker.io/library/busybox:stable\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 08 22:58:48 addons-917645 containerd[683]: time="2024-01-08T22:58:48.998021058Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=2239851"
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.000479013Z" level=info msg="ImageCreate event name:\"sha256:9211bbaa0dbd68fed073065eb9f0a6ed00a75090a9235eca2554c62d1e75c58f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.007833446Z" level=info msg="ImageUpdate event name:\"docker.io/library/busybox:stable\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.011810102Z" level=info msg="ImageCreate event name:\"docker.io/library/busybox@sha256:ba76950ac9eaa407512c9d859cea48114eeff8a6f12ebaa5d32ce79d4a017dd8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.014042838Z" level=info msg="Pulled image \"busybox:stable\" with image id \"sha256:9211bbaa0dbd68fed073065eb9f0a6ed00a75090a9235eca2554c62d1e75c58f\", repo tag \"docker.io/library/busybox:stable\", repo digest \"docker.io/library/busybox@sha256:ba76950ac9eaa407512c9d859cea48114eeff8a6f12ebaa5d32ce79d4a017dd8\", size \"2231053\" in 2.916608487s"
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.014123763Z" level=info msg="PullImage \"busybox:stable\" returns image reference \"sha256:9211bbaa0dbd68fed073065eb9f0a6ed00a75090a9235eca2554c62d1e75c58f\""
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.018077419Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.022062940Z" level=info msg="CreateContainer within sandbox \"d12e66512f285c7cdbc0e4475958171ea9fd25962c4a555908665c4a26f64344\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.023020343Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.048107909Z" level=info msg="CreateContainer within sandbox \"d12e66512f285c7cdbc0e4475958171ea9fd25962c4a555908665c4a26f64344\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"a0a6194accc6da92938a6e162813941c7f3082c4d1236fe0923f629f4b4d0684\""
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.051716281Z" level=info msg="StartContainer for \"a0a6194accc6da92938a6e162813941c7f3082c4d1236fe0923f629f4b4d0684\""
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.174590958Z" level=info msg="StartContainer for \"a0a6194accc6da92938a6e162813941c7f3082c4d1236fe0923f629f4b4d0684\" returns successfully"
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.250696350Z" level=info msg="shim disconnected" id=a0a6194accc6da92938a6e162813941c7f3082c4d1236fe0923f629f4b4d0684 namespace=k8s.io
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.250964544Z" level=warning msg="cleaning up after shim disconnected" id=a0a6194accc6da92938a6e162813941c7f3082c4d1236fe0923f629f4b4d0684 namespace=k8s.io
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.251031893Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jan 08 22:58:49 addons-917645 containerd[683]: time="2024-01-08T22:58:49.935818707Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Jan 08 22:58:50 addons-917645 containerd[683]: time="2024-01-08T22:58:50.380163427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:registry-test,Uid:405a9d46-ead7-4895-b876-a9d1e03f7376,Namespace:default,Attempt:0,}"
	
	
	==> coredns [4a31cc7af934fca10c6ead8d610d747574a4f5f88acdf9aefb584ee8db0c020f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:56178 - 44439 "HINFO IN 933999997403560457.6625470405339931052. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009113281s
	[INFO] 10.244.0.22:34196 - 39416 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000463216s
	[INFO] 10.244.0.22:40883 - 57082 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014167s
	[INFO] 10.244.0.22:53372 - 22767 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011521s
	[INFO] 10.244.0.22:52640 - 43594 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167597s
	[INFO] 10.244.0.22:41724 - 47282 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.000377898s
	
	
	==> coredns [9fab34023d467594f7c68c14fd041ab7848ac572b7b17e8a8ed62ea6cb0b0b7a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55373 - 53966 "HINFO IN 1214538106415731088.2964547491972966439. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007127962s
	[INFO] 10.244.0.22:48749 - 115 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000347224s
	[INFO] 10.244.0.22:53748 - 15926 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122408s
	[INFO] 10.244.0.22:54198 - 17905 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000516891s
	
	
	==> describe nodes <==
	Name:               addons-917645
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-917645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2af307dcbdf6e6ad5b00357c8e830bd90e7b60a
	                    minikube.k8s.io/name=addons-917645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_08T22_55_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-917645
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-917645"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Jan 2024 22:55:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-917645
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Jan 2024 22:58:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Jan 2024 22:58:46 +0000   Mon, 08 Jan 2024 22:55:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Jan 2024 22:58:46 +0000   Mon, 08 Jan 2024 22:55:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Jan 2024 22:58:46 +0000   Mon, 08 Jan 2024 22:55:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Jan 2024 22:58:46 +0000   Mon, 08 Jan 2024 22:55:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.75
	  Hostname:    addons-917645
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 f843937cf0b14796be19d31bd601a9f5
	  System UUID:                f843937c-f0b1-4796-be19-d31bd601a9f5
	  Boot ID:                    f2d27173-be34-4ab9-bb38-da5b0aa4fa29
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.11
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-msd9f      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     registry-test                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  gadget                      gadget-fb4xp                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  gcp-auth                    gcp-auth-d4c87556c-5d79l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-fvswh    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         2m43s
	  kube-system                 coredns-5dd5756b68-9nmdz                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m54s
	  kube-system                 coredns-5dd5756b68-tf28p                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m54s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 csi-hostpathplugin-mcf7x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 etcd-addons-917645                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m7s
	  kube-system                 kube-apiserver-addons-917645                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-controller-manager-addons-917645        200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-proxy-9hdbm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-scheduler-addons-917645                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 metrics-server-7c66d45ddc-ltcxd              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         2m45s
	  kube-system                 nvidia-device-plugin-daemonset-9rk2b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 registry-cghz4                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 registry-proxy-vnwfs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 snapshot-controller-58dbcc7b99-q5vxq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 snapshot-controller-58dbcc7b99-vj5rp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 tiller-deploy-7b677967b9-czsq5               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  local-path-storage          local-path-provisioner-78b46b4d5c-zwbhf      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-w2t4g               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     2m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  0 (0%)
	  memory             658Mi (17%)  596Mi (15%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m52s  kube-proxy       
	  Normal  Starting                 3m8s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m7s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m7s   kubelet          Node addons-917645 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m7s   kubelet          Node addons-917645 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s   kubelet          Node addons-917645 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s   kubelet          Node addons-917645 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m55s  node-controller  Node addons-917645 event: Registered Node addons-917645 in Controller
	
	
	==> dmesg <==
	[  +4.435219] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.472638] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.129545] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.997274] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.478018] systemd-fstab-generator[552]: Ignoring "noauto" for root device
	[  +0.111042] systemd-fstab-generator[563]: Ignoring "noauto" for root device
	[  +0.131308] systemd-fstab-generator[576]: Ignoring "noauto" for root device
	[  +0.091806] systemd-fstab-generator[587]: Ignoring "noauto" for root device
	[  +0.244400] systemd-fstab-generator[614]: Ignoring "noauto" for root device
	[  +6.275744] systemd-fstab-generator[674]: Ignoring "noauto" for root device
	[  +6.438037] systemd-fstab-generator[882]: Ignoring "noauto" for root device
	[  +8.229658] systemd-fstab-generator[1244]: Ignoring "noauto" for root device
	[Jan 8 22:56] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.010421] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.488418] kauditd_printk_skb: 26 callbacks suppressed
	[ +17.851220] kauditd_printk_skb: 4 callbacks suppressed
	[ +25.056106] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 8 22:57] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.697380] kauditd_printk_skb: 26 callbacks suppressed
	[ +17.710416] kauditd_printk_skb: 18 callbacks suppressed
	[Jan 8 22:58] kauditd_printk_skb: 18 callbacks suppressed
	[ +18.501228] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [c475687559087169dd9d90fed344dbc33d47738952aef86ea0851ad409e554fc] <==
	{"level":"info","ts":"2024-01-08T22:57:13.390641Z","caller":"traceutil/trace.go:171","msg":"trace[104597064] linearizableReadLoop","detail":"{readStateIndex:1040; appliedIndex:1039; }","duration":"243.168708ms","start":"2024-01-08T22:57:13.147459Z","end":"2024-01-08T22:57:13.390627Z","steps":["trace[104597064] 'read index received'  (duration: 232.248896ms)","trace[104597064] 'applied index is now lower than readState.Index'  (duration: 10.919319ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:57:13.390935Z","caller":"traceutil/trace.go:171","msg":"trace[823945688] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"286.67942ms","start":"2024-01-08T22:57:13.104243Z","end":"2024-01-08T22:57:13.390923Z","steps":["trace[823945688] 'process raft request'  (duration: 286.232518ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:57:13.391136Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.712775ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T22:57:13.391156Z","caller":"traceutil/trace.go:171","msg":"trace[763930197] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1005; }","duration":"243.74582ms","start":"2024-01-08T22:57:13.147405Z","end":"2024-01-08T22:57:13.391151Z","steps":["trace[763930197] 'agreement among raft nodes before linearized reading'  (duration: 243.69649ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:57:13.391433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.839746ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:86354"}
	{"level":"info","ts":"2024-01-08T22:57:13.391482Z","caller":"traceutil/trace.go:171","msg":"trace[1257967155] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1005; }","duration":"238.892576ms","start":"2024-01-08T22:57:13.152583Z","end":"2024-01-08T22:57:13.391475Z","steps":["trace[1257967155] 'agreement among raft nodes before linearized reading'  (duration: 238.717846ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:57:13.392084Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.458235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13631"}
	{"level":"info","ts":"2024-01-08T22:57:13.392143Z","caller":"traceutil/trace.go:171","msg":"trace[172930715] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1005; }","duration":"239.517643ms","start":"2024-01-08T22:57:13.152618Z","end":"2024-01-08T22:57:13.392136Z","steps":["trace[172930715] 'agreement among raft nodes before linearized reading'  (duration: 239.433168ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:57:15.330427Z","caller":"traceutil/trace.go:171","msg":"trace[1652361716] transaction","detail":"{read_only:false; response_revision:1017; number_of_response:1; }","duration":"107.055676ms","start":"2024-01-08T22:57:15.223357Z","end":"2024-01-08T22:57:15.330413Z","steps":["trace[1652361716] 'process raft request'  (duration: 101.556739ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:57:17.946657Z","caller":"traceutil/trace.go:171","msg":"trace[293539458] linearizableReadLoop","detail":"{readStateIndex:1079; appliedIndex:1078; }","duration":"128.460309ms","start":"2024-01-08T22:57:17.818183Z","end":"2024-01-08T22:57:17.946643Z","steps":["trace[293539458] 'read index received'  (duration: 127.796395ms)","trace[293539458] 'applied index is now lower than readState.Index'  (duration: 663.132µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-08T22:57:17.946915Z","caller":"traceutil/trace.go:171","msg":"trace[1025263508] transaction","detail":"{read_only:false; response_revision:1043; number_of_response:1; }","duration":"273.054344ms","start":"2024-01-08T22:57:17.673844Z","end":"2024-01-08T22:57:17.946899Z","steps":["trace[1025263508] 'process raft request'  (duration: 269.98868ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:57:17.947087Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.936855ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10572"}
	{"level":"info","ts":"2024-01-08T22:57:17.947106Z","caller":"traceutil/trace.go:171","msg":"trace[631393532] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1043; }","duration":"128.973ms","start":"2024-01-08T22:57:17.818128Z","end":"2024-01-08T22:57:17.947101Z","steps":["trace[631393532] 'agreement among raft nodes before linearized reading'  (duration: 128.899776ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:57:17.947431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.479698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:86416"}
	{"level":"info","ts":"2024-01-08T22:57:17.947458Z","caller":"traceutil/trace.go:171","msg":"trace[326741190] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1043; }","duration":"116.508737ms","start":"2024-01-08T22:57:17.830939Z","end":"2024-01-08T22:57:17.947448Z","steps":["trace[326741190] 'agreement among raft nodes before linearized reading'  (duration: 116.352317ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:57:28.334354Z","caller":"traceutil/trace.go:171","msg":"trace[637983304] linearizableReadLoop","detail":"{readStateIndex:1147; appliedIndex:1146; }","duration":"190.536918ms","start":"2024-01-08T22:57:28.143806Z","end":"2024-01-08T22:57:28.334343Z","steps":["trace[637983304] 'read index received'  (duration: 190.375473ms)","trace[637983304] 'applied index is now lower than readState.Index'  (duration: 161.086µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-08T22:57:28.334586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.775291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-917645\" ","response":"range_response_count:1 size:5941"}
	{"level":"info","ts":"2024-01-08T22:57:28.33465Z","caller":"traceutil/trace.go:171","msg":"trace[1095113521] range","detail":"{range_begin:/registry/minions/addons-917645; range_end:; response_count:1; response_revision:1109; }","duration":"190.833193ms","start":"2024-01-08T22:57:28.143783Z","end":"2024-01-08T22:57:28.334617Z","steps":["trace[1095113521] 'agreement among raft nodes before linearized reading'  (duration: 190.674281ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:57:28.334678Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.603912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-08T22:57:28.334705Z","caller":"traceutil/trace.go:171","msg":"trace[118049316] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1109; }","duration":"185.634647ms","start":"2024-01-08T22:57:28.149061Z","end":"2024-01-08T22:57:28.334695Z","steps":["trace[118049316] 'agreement among raft nodes before linearized reading'  (duration: 185.585244ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:57:28.334783Z","caller":"traceutil/trace.go:171","msg":"trace[870874010] transaction","detail":"{read_only:false; response_revision:1109; number_of_response:1; }","duration":"206.155725ms","start":"2024-01-08T22:57:28.128621Z","end":"2024-01-08T22:57:28.334777Z","steps":["trace[870874010] 'process raft request'  (duration: 205.591905ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-08T22:57:30.88694Z","caller":"traceutil/trace.go:171","msg":"trace[287856415] transaction","detail":"{read_only:false; response_revision:1116; number_of_response:1; }","duration":"445.227567ms","start":"2024-01-08T22:57:30.441699Z","end":"2024-01-08T22:57:30.886927Z","steps":["trace[287856415] 'process raft request'  (duration: 445.107439ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:57:30.887228Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T22:57:30.441683Z","time spent":"445.396349ms","remote":"127.0.0.1:55326","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1092 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-01-08T22:57:30.888383Z","caller":"traceutil/trace.go:171","msg":"trace[1058756595] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"446.58685ms","start":"2024-01-08T22:57:30.441786Z","end":"2024-01-08T22:57:30.888373Z","steps":["trace[1058756595] 'process raft request'  (duration: 446.260975ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-08T22:57:30.888901Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-08T22:57:30.441684Z","time spent":"447.17222ms","remote":"127.0.0.1:55302","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1109 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [91f7b129634387fdc713bc3e3748e45db87fa12cd8a4dfd4197dd8d4fbdec266] <==
	2024/01/08 22:58:37 GCP Auth Webhook started!
	2024/01/08 22:58:38 Ready to marshal response ...
	2024/01/08 22:58:38 Ready to write response ...
	2024/01/08 22:58:38 Ready to marshal response ...
	2024/01/08 22:58:38 Ready to write response ...
	2024/01/08 22:58:45 Ready to marshal response ...
	2024/01/08 22:58:45 Ready to write response ...
	2024/01/08 22:58:50 Ready to marshal response ...
	2024/01/08 22:58:50 Ready to write response ...
	
	
	==> kernel <==
	 22:58:51 up 3 min,  0 users,  load average: 0.96, 1.14, 0.52
	Linux addons-917645 5.10.57 #1 SMP Sat Dec 16 11:03:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [49c68ec901e310428d9a3ea37e7b09f918c243848317dbf413991c1b91394892] <==
	I0108 22:56:06.271845       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0108 22:56:06.775713       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:56:06.776008       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:56:07.732681       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.109.147.38"}
	I0108 22:56:07.790280       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.104.59.100"}
	I0108 22:56:07.911713       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W0108 22:56:09.247307       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:56:10.328926       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.144.36"}
	I0108 22:56:10.357278       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0108 22:56:10.671739       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.106.59.56"}
	W0108 22:56:12.097451       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:56:13.089278       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.102.166.237"}
	I0108 22:56:39.298608       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0108 22:56:42.012145       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.238.99:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.238.99:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.238.99:443: connect: connection refused
	W0108 22:56:42.012341       1 handler_proxy.go:93] no RequestInfo found in the context
	E0108 22:56:42.012418       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0108 22:56:42.013996       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0108 22:56:42.014376       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.238.99:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.238.99:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.238.99:443: connect: connection refused
	E0108 22:56:42.018083       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.238.99:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.238.99:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.238.99:443: connect: connection refused
	I0108 22:56:42.097372       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:57:39.302858       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0108 22:58:39.303203       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [ccf9e142e94a159c07583650fbec41db3724085c575ed1b653e4f7127888eefd] <==
	I0108 22:57:20.400130       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0108 22:57:20.776891       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0108 22:57:20.788987       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0108 22:57:20.797701       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0108 22:57:20.798021       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0108 22:57:20.828734       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0108 22:57:20.841790       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0108 22:57:20.852239       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0108 22:57:20.852680       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0108 22:57:24.425912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="86.851µs"
	I0108 22:57:42.187817       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="26.837419ms"
	I0108 22:57:42.188178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="225.452µs"
	I0108 22:57:50.021111       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0108 22:57:50.023610       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0108 22:57:50.080461       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0108 22:57:50.081560       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0108 22:58:37.735345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="10.660675ms"
	I0108 22:58:37.735437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="59.5µs"
	I0108 22:58:38.048789       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0108 22:58:38.083940       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:58:38.194607       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:58:38.196694       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:58:40.438040       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:58:40.438180       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0108 22:58:44.537253       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [4fa091f9e73128e68e7143f7f5f6d8337327359137b22ac294a75b0b8f3702e9] <==
	I0108 22:55:57.383186       1 server_others.go:69] "Using iptables proxy"
	I0108 22:55:57.396476       1 node.go:141] Successfully retrieved node IP: 192.168.39.75
	I0108 22:55:57.602955       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0108 22:55:57.603000       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0108 22:55:58.086201       1 server_others.go:152] "Using iptables Proxier"
	I0108 22:55:58.086267       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0108 22:55:58.086426       1 server.go:846] "Version info" version="v1.28.4"
	I0108 22:55:58.086434       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0108 22:55:58.087948       1 config.go:188] "Starting service config controller"
	I0108 22:55:58.087982       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0108 22:55:58.087999       1 config.go:97] "Starting endpoint slice config controller"
	I0108 22:55:58.088002       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0108 22:55:58.088602       1 config.go:315] "Starting node config controller"
	I0108 22:55:58.088688       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0108 22:55:58.188681       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0108 22:55:58.188738       1 shared_informer.go:318] Caches are synced for node config
	I0108 22:55:58.188749       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6fa43fa4ec8f0ea7d3066272e4cf9fb30aa2c4b2ec99e5345a6b439d236a20d8] <==
	W0108 22:55:39.488659       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0108 22:55:39.491312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0108 22:55:39.488482       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0108 22:55:39.492265       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0108 22:55:39.488849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:55:39.494555       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0108 22:55:39.495357       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:55:39.495449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 22:55:40.349046       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0108 22:55:40.349164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0108 22:55:40.365310       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0108 22:55:40.366733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0108 22:55:40.404227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0108 22:55:40.404314       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0108 22:55:40.566447       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0108 22:55:40.566707       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0108 22:55:40.587356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0108 22:55:40.587594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0108 22:55:40.592681       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0108 22:55:40.592866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0108 22:55:40.606930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0108 22:55:40.607184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0108 22:55:40.649182       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0108 22:55:40.649308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0108 22:55:41.069573       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-01-08 22:55:09 UTC, ends at Mon 2024-01-08 22:58:51 UTC. --
	Jan 08 22:58:44 addons-917645 kubelet[1251]: I0108 22:58:44.740563    1251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f5f9b63aaa422affef58ea5f45879565c1a9af62f8fe955b19399a0ccf24934"
	Jan 08 22:58:44 addons-917645 kubelet[1251]: I0108 22:58:44.912881    1251 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="38e1d6a2-ee1e-430a-a5ca-3a5700879348" path="/var/lib/kubelet/pods/38e1d6a2-ee1e-430a-a5ca-3a5700879348/volumes"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.207655    1251 topology_manager.go:215] "Topology Admit Handler" podUID="b1516ec2-f75f-47f6-a439-a6ea9374c130" podNamespace="default" podName="test-local-path"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: E0108 22:58:45.208345    1251 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="38e1d6a2-ee1e-430a-a5ca-3a5700879348" containerName="helper-pod"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.208630    1251 memory_manager.go:346] "RemoveStaleState removing state" podUID="38e1d6a2-ee1e-430a-a5ca-3a5700879348" containerName="helper-pod"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.252781    1251 topology_manager.go:215] "Topology Admit Handler" podUID="001b248f-45e1-4206-90a3-a06d06fc2c74" podNamespace="default" podName="task-pv-pod"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.326641    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b1516ec2-f75f-47f6-a439-a6ea9374c130-gcp-creds\") pod \"test-local-path\" (UID: \"b1516ec2-f75f-47f6-a439-a6ea9374c130\") " pod="default/test-local-path"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.326928    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/001b248f-45e1-4206-90a3-a06d06fc2c74-gcp-creds\") pod \"task-pv-pod\" (UID: \"001b248f-45e1-4206-90a3-a06d06fc2c74\") " pod="default/task-pv-pod"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.327160    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99fa07b4-2440-40b1-b052-322948fe5cc8\" (UniqueName: \"kubernetes.io/host-path/b1516ec2-f75f-47f6-a439-a6ea9374c130-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8\") pod \"test-local-path\" (UID: \"b1516ec2-f75f-47f6-a439-a6ea9374c130\") " pod="default/test-local-path"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.327464    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwgtg\" (UniqueName: \"kubernetes.io/projected/b1516ec2-f75f-47f6-a439-a6ea9374c130-kube-api-access-qwgtg\") pod \"test-local-path\" (UID: \"b1516ec2-f75f-47f6-a439-a6ea9374c130\") " pod="default/test-local-path"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.327710    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bc5283bc-2f49-4130-a2f2-ef39a26f91b0\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^795d58e1-ae79-11ee-8ba7-e2db1609e4b4\") pod \"task-pv-pod\" (UID: \"001b248f-45e1-4206-90a3-a06d06fc2c74\") " pod="default/task-pv-pod"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.327946    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc8tb\" (UniqueName: \"kubernetes.io/projected/001b248f-45e1-4206-90a3-a06d06fc2c74-kube-api-access-vc8tb\") pod \"task-pv-pod\" (UID: \"001b248f-45e1-4206-90a3-a06d06fc2c74\") " pod="default/task-pv-pod"
	Jan 08 22:58:45 addons-917645 kubelet[1251]: I0108 22:58:45.441265    1251 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-bc5283bc-2f49-4130-a2f2-ef39a26f91b0\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^795d58e1-ae79-11ee-8ba7-e2db1609e4b4\") pod \"task-pv-pod\" (UID: \"001b248f-45e1-4206-90a3-a06d06fc2c74\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/6f8cd1afb71c121bf859d7f65db66f829bc9be5279fa9c96a6c2adf06892b310/globalmount\"" pod="default/task-pv-pod"
	Jan 08 22:58:50 addons-917645 kubelet[1251]: I0108 22:58:50.066871    1251 topology_manager.go:215] "Topology Admit Handler" podUID="405a9d46-ead7-4895-b876-a9d1e03f7376" podNamespace="default" podName="registry-test"
	Jan 08 22:58:50 addons-917645 kubelet[1251]: I0108 22:58:50.169701    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6tg\" (UniqueName: \"kubernetes.io/projected/405a9d46-ead7-4895-b876-a9d1e03f7376-kube-api-access-vd6tg\") pod \"registry-test\" (UID: \"405a9d46-ead7-4895-b876-a9d1e03f7376\") " pod="default/registry-test"
	Jan 08 22:58:50 addons-917645 kubelet[1251]: I0108 22:58:50.169777    1251 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/405a9d46-ead7-4895-b876-a9d1e03f7376-gcp-creds\") pod \"registry-test\" (UID: \"405a9d46-ead7-4895-b876-a9d1e03f7376\") " pod="default/registry-test"
	Jan 08 22:58:51 addons-917645 kubelet[1251]: I0108 22:58:51.182950    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwgtg\" (UniqueName: \"kubernetes.io/projected/b1516ec2-f75f-47f6-a439-a6ea9374c130-kube-api-access-qwgtg\") pod \"b1516ec2-f75f-47f6-a439-a6ea9374c130\" (UID: \"b1516ec2-f75f-47f6-a439-a6ea9374c130\") "
	Jan 08 22:58:51 addons-917645 kubelet[1251]: I0108 22:58:51.182991    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b1516ec2-f75f-47f6-a439-a6ea9374c130-gcp-creds\") pod \"b1516ec2-f75f-47f6-a439-a6ea9374c130\" (UID: \"b1516ec2-f75f-47f6-a439-a6ea9374c130\") "
	Jan 08 22:58:51 addons-917645 kubelet[1251]: I0108 22:58:51.183017    1251 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b1516ec2-f75f-47f6-a439-a6ea9374c130-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8\") pod \"b1516ec2-f75f-47f6-a439-a6ea9374c130\" (UID: \"b1516ec2-f75f-47f6-a439-a6ea9374c130\") "
	Jan 08 22:58:51 addons-917645 kubelet[1251]: I0108 22:58:51.183122    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1516ec2-f75f-47f6-a439-a6ea9374c130-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8" (OuterVolumeSpecName: "data") pod "b1516ec2-f75f-47f6-a439-a6ea9374c130" (UID: "b1516ec2-f75f-47f6-a439-a6ea9374c130"). InnerVolumeSpecName "pvc-99fa07b4-2440-40b1-b052-322948fe5cc8". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 08 22:58:51 addons-917645 kubelet[1251]: I0108 22:58:51.183440    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1516ec2-f75f-47f6-a439-a6ea9374c130-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b1516ec2-f75f-47f6-a439-a6ea9374c130" (UID: "b1516ec2-f75f-47f6-a439-a6ea9374c130"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 08 22:58:51 addons-917645 kubelet[1251]: I0108 22:58:51.190761    1251 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1516ec2-f75f-47f6-a439-a6ea9374c130-kube-api-access-qwgtg" (OuterVolumeSpecName: "kube-api-access-qwgtg") pod "b1516ec2-f75f-47f6-a439-a6ea9374c130" (UID: "b1516ec2-f75f-47f6-a439-a6ea9374c130"). InnerVolumeSpecName "kube-api-access-qwgtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 08 22:58:51 addons-917645 kubelet[1251]: I0108 22:58:51.283866    1251 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qwgtg\" (UniqueName: \"kubernetes.io/projected/b1516ec2-f75f-47f6-a439-a6ea9374c130-kube-api-access-qwgtg\") on node \"addons-917645\" DevicePath \"\""
	Jan 08 22:58:51 addons-917645 kubelet[1251]: I0108 22:58:51.283910    1251 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b1516ec2-f75f-47f6-a439-a6ea9374c130-gcp-creds\") on node \"addons-917645\" DevicePath \"\""
	Jan 08 22:58:51 addons-917645 kubelet[1251]: I0108 22:58:51.283930    1251 reconciler_common.go:300] "Volume detached for volume \"pvc-99fa07b4-2440-40b1-b052-322948fe5cc8\" (UniqueName: \"kubernetes.io/host-path/b1516ec2-f75f-47f6-a439-a6ea9374c130-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8\") on node \"addons-917645\" DevicePath \"\""
	
	
	==> storage-provisioner [cedddb8915c0dcd650fe0f3bfac1289c760c231d686a43d897267155412a70ac] <==
	I0108 22:56:07.060283       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0108 22:56:07.090355       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0108 22:56:07.090690       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0108 22:56:07.103069       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0108 22:56:07.103227       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-917645_7f815466-ebda-4c57-97d9-2e20c057bb82!
	I0108 22:56:07.105283       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f9a771e8-bed8-4540-83e9-8bbf834f27ca", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-917645_7f815466-ebda-4c57-97d9-2e20c057bb82 became leader
	I0108 22:56:07.204616       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-917645_7f815466-ebda-4c57-97d9-2e20c057bb82!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-917645 -n addons-917645
helpers_test.go:261: (dbg) Run:  kubectl --context addons-917645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: registry-test task-pv-pod ingress-nginx-admission-create-tbkdj ingress-nginx-admission-patch-nrmj7 helper-pod-delete-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/NvidiaDevicePlugin]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-917645 describe pod registry-test task-pv-pod ingress-nginx-admission-create-tbkdj ingress-nginx-admission-patch-nrmj7 helper-pod-delete-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-917645 describe pod registry-test task-pv-pod ingress-nginx-admission-create-tbkdj ingress-nginx-admission-patch-nrmj7 helper-pod-delete-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8: exit status 1 (104.196432ms)

-- stdout --
	Name:             registry-test
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-917645/192.168.39.75
	Start Time:       Mon, 08 Jan 2024 22:58:50 +0000
	Labels:           run=registry-test
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vd6tg (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-vd6tg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/registry-test to addons-917645
	  Normal  Pulling    1s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-917645/192.168.39.75
	Start Time:       Mon, 08 Jan 2024 22:58:45 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vc8tb (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-vc8tb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/task-pv-pod to addons-917645
	  Normal  Pulling    6s    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tbkdj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-nrmj7" not found
	Error from server (NotFound): pods "helper-pod-delete-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-917645 describe pod registry-test task-pv-pod ingress-nginx-admission-create-tbkdj ingress-nginx-admission-patch-nrmj7 helper-pod-delete-pvc-99fa07b4-2440-40b1-b052-322948fe5cc8: exit status 1
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (8.13s)


Test pass (274/314)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 60.05
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 52.14
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
17 TestDownloadOnly/v1.29.0-rc.2/json-events 52.1
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.14
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
26 TestBinaryMirror 0.57
27 TestOffline 130.36
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 220.89
34 TestAddons/parallel/Registry 23.89
35 TestAddons/parallel/Ingress 22.3
36 TestAddons/parallel/InspektorGadget 11.87
37 TestAddons/parallel/MetricsServer 7
38 TestAddons/parallel/HelmTiller 18.15
40 TestAddons/parallel/CSI 59.06
41 TestAddons/parallel/Headlamp 15.2
42 TestAddons/parallel/CloudSpanner 5.58
43 TestAddons/parallel/LocalPath 57.78
45 TestAddons/parallel/Yakd 6.01
48 TestAddons/serial/GCPAuth/Namespaces 0.11
49 TestAddons/StoppedEnableDisable 92.52
50 TestCertOptions 54.39
51 TestCertExpiration 328.23
53 TestForceSystemdFlag 100.76
54 TestForceSystemdEnv 52.64
56 TestKVMDriverInstallOrUpdate 8.42
60 TestErrorSpam/setup 50.57
61 TestErrorSpam/start 0.37
62 TestErrorSpam/status 0.75
63 TestErrorSpam/pause 1.51
64 TestErrorSpam/unpause 1.62
65 TestErrorSpam/stop 11.52
68 TestFunctional/serial/CopySyncFile 0
69 TestFunctional/serial/StartWithProxy 100.39
70 TestFunctional/serial/AuditLog 0
71 TestFunctional/serial/SoftStart 6.01
72 TestFunctional/serial/KubeContext 0.04
73 TestFunctional/serial/KubectlGetPods 0.07
76 TestFunctional/serial/CacheCmd/cache/add_remote 3.86
77 TestFunctional/serial/CacheCmd/cache/add_local 3.05
78 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
79 TestFunctional/serial/CacheCmd/cache/list 0.06
80 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
81 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
82 TestFunctional/serial/CacheCmd/cache/delete 0.12
83 TestFunctional/serial/MinikubeKubectlCmd 0.12
84 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
85 TestFunctional/serial/ExtraConfig 43.74
86 TestFunctional/serial/ComponentHealth 0.07
87 TestFunctional/serial/LogsCmd 1.44
88 TestFunctional/serial/LogsFileCmd 1.46
89 TestFunctional/serial/InvalidService 5.53
91 TestFunctional/parallel/ConfigCmd 0.4
92 TestFunctional/parallel/DashboardCmd 20.26
93 TestFunctional/parallel/DryRun 0.3
94 TestFunctional/parallel/InternationalLanguage 0.15
95 TestFunctional/parallel/StatusCmd 0.85
99 TestFunctional/parallel/ServiceCmdConnect 7.45
100 TestFunctional/parallel/AddonsCmd 0.14
101 TestFunctional/parallel/PersistentVolumeClaim 45.47
103 TestFunctional/parallel/SSHCmd 0.41
104 TestFunctional/parallel/CpCmd 1.45
105 TestFunctional/parallel/MySQL 39.62
106 TestFunctional/parallel/FileSync 0.28
107 TestFunctional/parallel/CertSync 1.58
111 TestFunctional/parallel/NodeLabels 0.06
113 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
115 TestFunctional/parallel/License 0.84
116 TestFunctional/parallel/ServiceCmd/DeployApp 13.21
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
118 TestFunctional/parallel/ProfileCmd/profile_list 0.36
119 TestFunctional/parallel/MountCmd/any-port 11.74
120 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
124 TestFunctional/parallel/MountCmd/specific-port 1.7
125 TestFunctional/parallel/ServiceCmd/List 0.46
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
128 TestFunctional/parallel/MountCmd/VerifyCleanup 0.76
129 TestFunctional/parallel/ServiceCmd/Format 0.32
130 TestFunctional/parallel/ServiceCmd/URL 0.31
140 TestFunctional/parallel/Version/short 0.06
141 TestFunctional/parallel/Version/components 0.63
142 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
143 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
144 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
145 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
146 TestFunctional/parallel/ImageCommands/ImageBuild 5.61
147 TestFunctional/parallel/ImageCommands/Setup 2.7
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.53
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.17
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.33
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.29
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.2
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.49
155 TestFunctional/delete_addon-resizer_images 0.06
156 TestFunctional/delete_my-image_image 0.01
157 TestFunctional/delete_minikube_cached_images 0.01
161 TestIngressAddonLegacy/StartLegacyK8sCluster 138.48
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.95
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.55
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.81
168 TestJSONOutput/start/Command 100.58
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.65
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.61
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 7.1
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.21
196 TestMainNoArgs 0.05
197 TestMinikubeProfile 99.88
200 TestMountStart/serial/StartWithMountFirst 28.66
201 TestMountStart/serial/VerifyMountFirst 0.39
202 TestMountStart/serial/StartWithMountSecond 29.73
203 TestMountStart/serial/VerifyMountSecond 0.39
204 TestMountStart/serial/DeleteFirst 0.69
205 TestMountStart/serial/VerifyMountPostDelete 0.39
206 TestMountStart/serial/Stop 1.12
207 TestMountStart/serial/RestartStopped 23.84
208 TestMountStart/serial/VerifyMountPostStop 0.39
211 TestMultiNode/serial/FreshStart2Nodes 181.84
212 TestMultiNode/serial/DeployApp2Nodes 6.56
213 TestMultiNode/serial/PingHostFrom2Pods 0.88
214 TestMultiNode/serial/AddNode 43.22
215 TestMultiNode/serial/MultiNodeLabels 0.06
216 TestMultiNode/serial/ProfileList 0.21
217 TestMultiNode/serial/CopyFile 7.47
218 TestMultiNode/serial/StopNode 2.2
219 TestMultiNode/serial/StartAfterStop 26.09
220 TestMultiNode/serial/RestartKeepsNodes 313.93
221 TestMultiNode/serial/DeleteNode 1.69
222 TestMultiNode/serial/StopMultiNode 183.44
223 TestMultiNode/serial/RestartMultiNode 91.27
224 TestMultiNode/serial/ValidateNameConflict 54.29
229 TestPreload 345.44
231 TestScheduledStopUnix 120.48
235 TestRunningBinaryUpgrade 225.43
237 TestKubernetesUpgrade 184.42
240 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
241 TestNoKubernetes/serial/StartWithK8s 128.09
242 TestNoKubernetes/serial/StartWithStopK8s 18.01
250 TestNetworkPlugins/group/false 3.41
254 TestNoKubernetes/serial/Start 51.92
255 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
256 TestNoKubernetes/serial/ProfileList 1.57
257 TestNoKubernetes/serial/Stop 1.38
258 TestNoKubernetes/serial/StartNoArgs 46.25
259 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
260 TestStoppedBinaryUpgrade/Setup 3.06
261 TestStoppedBinaryUpgrade/Upgrade 165.92
270 TestPause/serial/Start 66.08
271 TestNetworkPlugins/group/auto/Start 103.35
272 TestNetworkPlugins/group/kindnet/Start 91.62
273 TestPause/serial/SecondStartNoReconfiguration 44.66
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
275 TestNetworkPlugins/group/calico/Start 122.17
276 TestPause/serial/Pause 1.19
277 TestPause/serial/VerifyStatus 0.32
278 TestPause/serial/Unpause 0.79
279 TestPause/serial/PauseAgain 1.14
280 TestPause/serial/DeletePaused 0.93
281 TestPause/serial/VerifyDeletedResources 0.43
282 TestNetworkPlugins/group/custom-flannel/Start 104.99
283 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
284 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
285 TestNetworkPlugins/group/kindnet/NetCatPod 12.27
286 TestNetworkPlugins/group/auto/KubeletFlags 0.32
287 TestNetworkPlugins/group/auto/NetCatPod 11.35
288 TestNetworkPlugins/group/auto/DNS 0.24
289 TestNetworkPlugins/group/kindnet/DNS 0.26
290 TestNetworkPlugins/group/auto/Localhost 0.19
291 TestNetworkPlugins/group/kindnet/Localhost 0.22
292 TestNetworkPlugins/group/auto/HairPin 0.21
293 TestNetworkPlugins/group/kindnet/HairPin 0.21
294 TestNetworkPlugins/group/enable-default-cni/Start 115.63
295 TestNetworkPlugins/group/flannel/Start 126.01
296 TestNetworkPlugins/group/calico/ControllerPod 6.01
297 TestNetworkPlugins/group/calico/KubeletFlags 0.24
298 TestNetworkPlugins/group/calico/NetCatPod 13.25
299 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
300 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.32
301 TestNetworkPlugins/group/custom-flannel/DNS 0.21
302 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
303 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
304 TestNetworkPlugins/group/calico/DNS 0.22
305 TestNetworkPlugins/group/calico/Localhost 0.18
306 TestNetworkPlugins/group/calico/HairPin 0.16
307 TestNetworkPlugins/group/bridge/Start 104.52
309 TestStartStop/group/old-k8s-version/serial/FirstStart 156.7
310 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
311 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
312 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
313 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
314 TestNetworkPlugins/group/flannel/ControllerPod 6.01
315 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
316 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
317 TestNetworkPlugins/group/flannel/NetCatPod 9.3
318 TestNetworkPlugins/group/flannel/DNS 0.2
319 TestNetworkPlugins/group/flannel/Localhost 0.16
320 TestNetworkPlugins/group/flannel/HairPin 0.14
322 TestStartStop/group/no-preload/serial/FirstStart 188.02
324 TestStartStop/group/embed-certs/serial/FirstStart 80.3
325 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
326 TestNetworkPlugins/group/bridge/NetCatPod 9.21
327 TestNetworkPlugins/group/bridge/DNS 0.22
328 TestNetworkPlugins/group/bridge/Localhost 0.16
329 TestNetworkPlugins/group/bridge/HairPin 0.16
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 111.07
332 TestStartStop/group/old-k8s-version/serial/DeployApp 10.46
333 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.98
334 TestStartStop/group/old-k8s-version/serial/Stop 92.04
335 TestStartStop/group/embed-certs/serial/DeployApp 10.32
336 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
337 TestStartStop/group/embed-certs/serial/Stop 91.72
338 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
340 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.36
341 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
342 TestStartStop/group/old-k8s-version/serial/SecondStart 110.42
343 TestStartStop/group/no-preload/serial/DeployApp 11.31
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
345 TestStartStop/group/no-preload/serial/Stop 102.17
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
347 TestStartStop/group/embed-certs/serial/SecondStart 325.29
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 304.98
350 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 31.01
351 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
352 TestStartStop/group/no-preload/serial/SecondStart 307.8
353 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
354 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
355 TestStartStop/group/old-k8s-version/serial/Pause 2.85
357 TestStartStop/group/newest-cni/serial/FirstStart 65.85
358 TestStartStop/group/newest-cni/serial/DeployApp 0
359 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.23
360 TestStartStop/group/newest-cni/serial/Stop 2.12
361 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
362 TestStartStop/group/newest-cni/serial/SecondStart 46
363 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
365 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
366 TestStartStop/group/newest-cni/serial/Pause 2.54
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 19.01
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
369 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
370 TestStartStop/group/embed-certs/serial/Pause 2.56
371 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
372 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
373 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
374 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.62
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
377 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
378 TestStartStop/group/no-preload/serial/Pause 2.43
TestDownloadOnly/v1.16.0/json-events (60.05s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-445610 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-445610 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m0.051046304s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (60.05s)
TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-445610
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-445610: exit status 85 (72.519497ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |          |
	|         | -p download-only-445610        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:52:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:52:11.508417   15610 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:52:11.508719   15610 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:11.508733   15610 out.go:309] Setting ErrFile to fd 2...
	I0108 22:52:11.508738   15610 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:52:11.508932   15610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
	W0108 22:52:11.509040   15610 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-8357/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-8357/.minikube/config/config.json: no such file or directory
	I0108 22:52:11.509632   15610 out.go:303] Setting JSON to true
	I0108 22:52:11.510442   15610 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2049,"bootTime":1704752283,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:52:11.510496   15610 start.go:138] virtualization: kvm guest
	I0108 22:52:11.513274   15610 out.go:97] [download-only-445610] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W0108 22:52:11.513353   15610 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball: no such file or directory
	I0108 22:52:11.515077   15610 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:52:11.513393   15610 notify.go:220] Checking for updates...
	I0108 22:52:11.517994   15610 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:52:11.519397   15610 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	I0108 22:52:11.520851   15610 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	I0108 22:52:11.522375   15610 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 22:52:11.524988   15610 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:52:11.525181   15610 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 22:52:11.621339   15610 out.go:97] Using the kvm2 driver based on user configuration
	I0108 22:52:11.621365   15610 start.go:298] selected driver: kvm2
	I0108 22:52:11.621370   15610 start.go:902] validating driver "kvm2" against <nil>
	I0108 22:52:11.621678   15610 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:52:11.621786   15610 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-8357/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:52:11.635599   15610 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:52:11.635650   15610 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I0108 22:52:11.636149   15610 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0108 22:52:11.636309   15610 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I0108 22:52:11.636364   15610 cni.go:84] Creating CNI manager for ""
	I0108 22:52:11.636377   15610 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0108 22:52:11.636386   15610 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0108 22:52:11.636394   15610 start_flags.go:323] config:
	{Name:download-only-445610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-445610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:52:11.636599   15610 iso.go:125] acquiring lock: {Name:mk34e93ce8d707d1ba4f39937867ad6e31ba9f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:52:11.638430   15610 out.go:97] Downloading VM boot image ...
	I0108 22:52:11.638463   15610 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17830-8357/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I0108 22:52:23.246115   15610 out.go:97] Starting control plane node download-only-445610 in cluster download-only-445610
	I0108 22:52:23.246142   15610 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 22:52:23.398399   15610 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0108 22:52:23.398439   15610 cache.go:56] Caching tarball of preloaded images
	I0108 22:52:23.398576   15610 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 22:52:23.400365   15610 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0108 22:52:23.400378   15610 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0108 22:52:23.560566   15610 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0108 22:52:39.708149   15610 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0108 22:52:39.708236   15610 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0108 22:52:40.606553   15610 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0108 22:52:40.606928   15610 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/download-only-445610/config.json ...
	I0108 22:52:40.606960   15610 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/download-only-445610/config.json: {Name:mkb86833029b8fd08d9b5f8474dcfab7ee5158be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0108 22:52:40.607104   15610 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0108 22:52:40.607285   15610 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17830-8357/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-445610"
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
TestDownloadOnly/v1.28.4/json-events (52.14s)
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-445610 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-445610 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (52.13906161s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (52.14s)
TestDownloadOnly/v1.28.4/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-445610
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-445610: exit status 85 (71.839377ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |          |
	|         | -p download-only-445610        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |          |
	|         | -p download-only-445610        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:53:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:53:11.634572   15778 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:53:11.634689   15778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:53:11.634697   15778 out.go:309] Setting ErrFile to fd 2...
	I0108 22:53:11.634702   15778 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:53:11.634886   15778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
	W0108 22:53:11.634985   15778 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-8357/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-8357/.minikube/config/config.json: no such file or directory
	I0108 22:53:11.635386   15778 out.go:303] Setting JSON to true
	I0108 22:53:11.636174   15778 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2109,"bootTime":1704752283,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:53:11.636230   15778 start.go:138] virtualization: kvm guest
	I0108 22:53:11.638369   15778 out.go:97] [download-only-445610] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:53:11.640005   15778 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:53:11.638562   15778 notify.go:220] Checking for updates...
	I0108 22:53:11.642628   15778 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:53:11.644140   15778 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	I0108 22:53:11.645587   15778 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	I0108 22:53:11.646915   15778 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 22:53:11.649688   15778 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:53:11.650094   15778 config.go:182] Loaded profile config "download-only-445610": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0108 22:53:11.650136   15778 start.go:810] api.Load failed for download-only-445610: filestore "download-only-445610": Docker machine "download-only-445610" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:53:11.650204   15778 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 22:53:11.650237   15778 start.go:810] api.Load failed for download-only-445610: filestore "download-only-445610": Docker machine "download-only-445610" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:53:11.680231   15778 out.go:97] Using the kvm2 driver based on existing profile
	I0108 22:53:11.680249   15778 start.go:298] selected driver: kvm2
	I0108 22:53:11.680254   15778 start.go:902] validating driver "kvm2" against &{Name:download-only-445610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-445610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:53:11.680601   15778 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:53:11.680696   15778 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-8357/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:53:11.694070   15778 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:53:11.694727   15778 cni.go:84] Creating CNI manager for ""
	I0108 22:53:11.694743   15778 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0108 22:53:11.694757   15778 start_flags.go:323] config:
	{Name:download-only-445610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-445610 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:53:11.694903   15778 iso.go:125] acquiring lock: {Name:mk34e93ce8d707d1ba4f39937867ad6e31ba9f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:53:11.696457   15778 out.go:97] Starting control plane node download-only-445610 in cluster download-only-445610
	I0108 22:53:11.696468   15778 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 22:53:12.361067   15778 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0108 22:53:12.361101   15778 cache.go:56] Caching tarball of preloaded images
	I0108 22:53:12.361243   15778 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 22:53:12.363232   15778 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0108 22:53:12.363246   15778 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0108 22:53:12.518332   15778 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0108 22:53:31.151133   15778 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0108 22:53:31.151252   15778 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0108 22:53:32.078313   15778 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on containerd
	I0108 22:53:32.078440   15778 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/download-only-445610/config.json ...
	I0108 22:53:32.078638   15778 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0108 22:53:32.078795   15778 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17830-8357/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-445610"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

TestDownloadOnly/v1.29.0-rc.2/json-events (52.1s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-445610 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-445610 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (52.099513326s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (52.10s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-445610
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-445610: exit status 85 (71.579884ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:52 UTC |          |
	|         | -p download-only-445610           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:53 UTC |          |
	|         | -p download-only-445610           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-445610 | jenkins | v1.32.0 | 08 Jan 24 22:54 UTC |          |
	|         | -p download-only-445610           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=containerd    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/08 22:54:03
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0108 22:54:03.845132   15940 out.go:296] Setting OutFile to fd 1 ...
	I0108 22:54:03.845390   15940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:03.845400   15940 out.go:309] Setting ErrFile to fd 2...
	I0108 22:54:03.845408   15940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 22:54:03.845597   15940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
	W0108 22:54:03.845722   15940 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17830-8357/.minikube/config/config.json: open /home/jenkins/minikube-integration/17830-8357/.minikube/config/config.json: no such file or directory
	I0108 22:54:03.846140   15940 out.go:303] Setting JSON to true
	I0108 22:54:03.846935   15940 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2161,"bootTime":1704752283,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 22:54:03.846990   15940 start.go:138] virtualization: kvm guest
	I0108 22:54:03.849106   15940 out.go:97] [download-only-445610] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 22:54:03.850616   15940 out.go:169] MINIKUBE_LOCATION=17830
	I0108 22:54:03.849264   15940 notify.go:220] Checking for updates...
	I0108 22:54:03.853101   15940 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 22:54:03.854438   15940 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	I0108 22:54:03.855606   15940 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	I0108 22:54:03.856826   15940 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0108 22:54:03.859276   15940 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0108 22:54:03.859663   15940 config.go:182] Loaded profile config "download-only-445610": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	W0108 22:54:03.859703   15940 start.go:810] api.Load failed for download-only-445610: filestore "download-only-445610": Docker machine "download-only-445610" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:54:03.859782   15940 driver.go:392] Setting default libvirt URI to qemu:///system
	W0108 22:54:03.859811   15940 start.go:810] api.Load failed for download-only-445610: filestore "download-only-445610": Docker machine "download-only-445610" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0108 22:54:03.891322   15940 out.go:97] Using the kvm2 driver based on existing profile
	I0108 22:54:03.891341   15940 start.go:298] selected driver: kvm2
	I0108 22:54:03.891346   15940 start.go:902] validating driver "kvm2" against &{Name:download-only-445610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:download-only-445610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:54:03.891703   15940 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:54:03.891771   15940 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17830-8357/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0108 22:54:03.904964   15940 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0108 22:54:03.905633   15940 cni.go:84] Creating CNI manager for ""
	I0108 22:54:03.905649   15940 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0108 22:54:03.905661   15940 start_flags.go:323] config:
	{Name:download-only-445610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-445610 Namespace
:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 22:54:03.905783   15940 iso.go:125] acquiring lock: {Name:mk34e93ce8d707d1ba4f39937867ad6e31ba9f3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0108 22:54:03.907496   15940 out.go:97] Starting control plane node download-only-445610 in cluster download-only-445610
	I0108 22:54:03.907509   15940 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0108 22:54:04.555984   15940 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0108 22:54:04.556028   15940 cache.go:56] Caching tarball of preloaded images
	I0108 22:54:04.556189   15940 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0108 22:54:04.557917   15940 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0108 22:54:04.557930   15940 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0108 22:54:04.714595   15940 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:24c8d97965ae2515db31ece6a310bbf9 -> /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0108 22:54:22.947238   15940 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0108 22:54:22.947330   15940 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17830-8357/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0108 22:54:23.758043   15940 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on containerd
	I0108 22:54:23.758171   15940 profile.go:148] Saving config to /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/download-only-445610/config.json ...
	I0108 22:54:23.758378   15940 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0108 22:54:23.758565   15940 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17830-8357/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-445610"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-445610
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-599788 --alsologtostderr --binary-mirror http://127.0.0.1:33131 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-599788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-599788
--- PASS: TestBinaryMirror (0.57s)

TestOffline (130.36s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-970626 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-970626 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m9.271597956s)
helpers_test.go:175: Cleaning up "offline-containerd-970626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-970626
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-970626: (1.089947976s)
--- PASS: TestOffline (130.36s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-917645
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-917645: exit status 85 (64.20346ms)

-- stdout --
	* Profile "addons-917645" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-917645"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-917645
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-917645: exit status 85 (61.781537ms)

-- stdout --
	* Profile "addons-917645" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-917645"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (220.89s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-917645 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-917645 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m40.889618097s)
--- PASS: TestAddons/Setup (220.89s)

TestAddons/parallel/Registry (23.89s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 15.483807ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-cghz4" [5cab67d2-562a-4226-8ed1-603f82df4ccd] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.024211335s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vnwfs" [f26fea03-edb3-469e-ac2b-dde7893070a2] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006463356s
addons_test.go:340: (dbg) Run:  kubectl --context addons-917645 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-917645 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-917645 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.04215295s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 ip
2024/01/08 22:59:01 [DEBUG] GET http://192.168.39.75:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (23.89s)

TestAddons/parallel/Ingress (22.3s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-917645 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-917645 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-917645 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b21a6565-a943-4585-83fd-8ba270c5406d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b21a6565-a943-4585-83fd-8ba270c5406d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004215197s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-917645 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.75
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-917645 addons disable ingress-dns --alsologtostderr -v=1: (1.216074366s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-917645 addons disable ingress --alsologtostderr -v=1: (7.768588056s)
--- PASS: TestAddons/parallel/Ingress (22.30s)

TestAddons/parallel/InspektorGadget (11.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-fb4xp" [2f99e734-f49c-4177-9526-0225077e429f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004701364s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-917645
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-917645: (5.863978908s)
--- PASS: TestAddons/parallel/InspektorGadget (11.87s)

TestAddons/parallel/MetricsServer (7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.611155ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-ltcxd" [571077f8-dd80-4589-b5ee-6d050e13715d] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.009943738s
addons_test.go:415: (dbg) Run:  kubectl --context addons-917645 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.00s)

TestAddons/parallel/HelmTiller (18.15s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.663304ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-czsq5" [11e488b3-2ed7-40a9-9ca3-595f8137fc7e] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.006031264s
addons_test.go:473: (dbg) Run:  kubectl --context addons-917645 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-917645 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.338303792s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (18.15s)

TestAddons/parallel/CSI (59.06s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 17.521121ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-917645 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-917645 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [001b248f-45e1-4206-90a3-a06d06fc2c74] Pending
helpers_test.go:344: "task-pv-pod" [001b248f-45e1-4206-90a3-a06d06fc2c74] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [001b248f-45e1-4206-90a3-a06d06fc2c74] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.004548483s
addons_test.go:584: (dbg) Run:  kubectl --context addons-917645 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-917645 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-917645 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-917645 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-917645 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-917645 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-917645 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c48cd99e-7e1f-43d9-bd85-40092e7e4ca7] Pending
helpers_test.go:344: "task-pv-pod-restore" [c48cd99e-7e1f-43d9-bd85-40092e7e4ca7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c48cd99e-7e1f-43d9-bd85-40092e7e4ca7] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004777203s
addons_test.go:626: (dbg) Run:  kubectl --context addons-917645 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-917645 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-917645 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-917645 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.725915828s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.06s)

TestAddons/parallel/Headlamp (15.2s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-917645 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-917645 --alsologtostderr -v=1: (1.186742875s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-sbvf8" [3208dd64-3d7a-40f8-abce-8e23109870e2] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-sbvf8" [3208dd64-3d7a-40f8-abce-8e23109870e2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-sbvf8" [3208dd64-3d7a-40f8-abce-8e23109870e2] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.007615543s
--- PASS: TestAddons/parallel/Headlamp (15.20s)

TestAddons/parallel/CloudSpanner (5.58s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-msd9f" [3ae23b0f-8fde-4e8f-83c1-3418a8ddc609] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005344931s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-917645
--- PASS: TestAddons/parallel/CloudSpanner (5.58s)

TestAddons/parallel/LocalPath (57.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-917645 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-917645 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-917645 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b1516ec2-f75f-47f6-a439-a6ea9374c130] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b1516ec2-f75f-47f6-a439-a6ea9374c130] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b1516ec2-f75f-47f6-a439-a6ea9374c130] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004563934s
addons_test.go:891: (dbg) Run:  kubectl --context addons-917645 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 ssh "cat /opt/local-path-provisioner/pvc-99fa07b4-2440-40b1-b052-322948fe5cc8_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-917645 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-917645 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-917645 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-917645 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.871917549s)
--- PASS: TestAddons/parallel/LocalPath (57.78s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-w2t4g" [b6f705a0-da60-472b-9fe4-dbee6af96144] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003762893s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-917645 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-917645 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (92.52s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-917645
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-917645: (1m32.221292757s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-917645
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-917645
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-917645
--- PASS: TestAddons/StoppedEnableDisable (92.52s)

TestCertOptions (54.39s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-334229 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-334229 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (52.88285329s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-334229 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-334229 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-334229 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-334229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-334229
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-334229: (1.016793989s)
--- PASS: TestCertOptions (54.39s)

TestCertExpiration (328.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-092911 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-092911 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m39.372339484s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-092911 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-092911 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (47.449828131s)
helpers_test.go:175: Cleaning up "cert-expiration-092911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-092911
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-092911: (1.405734856s)
--- PASS: TestCertExpiration (328.23s)

TestForceSystemdFlag (100.76s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-463023 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0108 23:38:37.895029   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:38:59.339582   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-463023 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m39.511487197s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-463023 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-463023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-463023
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-463023: (1.03667251s)
--- PASS: TestForceSystemdFlag (100.76s)

TestForceSystemdEnv (52.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-015167 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-015167 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (50.595509467s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-015167 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-015167" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-015167
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-015167: (1.825480029s)
--- PASS: TestForceSystemdEnv (52.64s)

TestKVMDriverInstallOrUpdate (8.42s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.42s)

TestErrorSpam/setup (50.57s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-678072 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-678072 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-678072 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-678072 --driver=kvm2  --container-runtime=containerd: (50.566042348s)
--- PASS: TestErrorSpam/setup (50.57s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.75s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 status
--- PASS: TestErrorSpam/status (0.75s)

TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

TestErrorSpam/stop (11.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 stop: (11.361021079s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678072 --log_dir /tmp/nospam-678072 stop
--- PASS: TestErrorSpam/stop (11.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/17830-8357/.minikube/files/etc/test/nested/copy/15598/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (100.39s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817032 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0108 23:03:37.895356   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:37.901013   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:37.911287   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:37.931538   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:37.971834   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:38.052144   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:38.212550   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:38.533132   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:39.174128   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:40.455212   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:43.016566   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:48.136962   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:03:58.377321   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:04:18.857592   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-817032 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m40.387873048s)
--- PASS: TestFunctional/serial/StartWithProxy (100.39s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817032 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-817032 --alsologtostderr -v=8: (6.014048027s)
functional_test.go:659: soft start took 6.014568319s for "functional-817032" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.01s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-817032 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 cache add registry.k8s.io/pause:3.1: (1.227104728s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 cache add registry.k8s.io/pause:3.3: (1.317727174s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 cache add registry.k8s.io/pause:latest: (1.311681891s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)

TestFunctional/serial/CacheCmd/cache/add_local (3.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-817032 /tmp/TestFunctionalserialCacheCmdcacheadd_local3251896675/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 cache add minikube-local-cache-test:functional-817032
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 cache add minikube-local-cache-test:functional-817032: (2.728230405s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 cache delete minikube-local-cache-test:functional-817032
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-817032
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.05s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817032 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (218.580335ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 cache reload: (1.151061438s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 kubectl -- --context functional-817032 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-817032 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (43.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817032 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0108 23:04:59.818404   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-817032 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.740959026s)
functional_test.go:757: restart took 43.741093747s for "functional-817032" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.74s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-817032 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.44s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 logs: (1.438530071s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

TestFunctional/serial/LogsFileCmd (1.46s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 logs --file /tmp/TestFunctionalserialLogsFileCmd15861934/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 logs --file /tmp/TestFunctionalserialLogsFileCmd15861934/001/logs.txt: (1.455583373s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

TestFunctional/serial/InvalidService (5.53s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-817032 apply -f testdata/invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-817032
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-817032: exit status 115 (290.706828ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.26:30444 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-817032 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-817032 delete -f testdata/invalidsvc.yaml: (2.055714945s)
--- PASS: TestFunctional/serial/InvalidService (5.53s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817032 config get cpus: exit status 14 (75.054627ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817032 config get cpus: exit status 14 (54.336171ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (20.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-817032 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-817032 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 22224: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.26s)

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817032 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-817032 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (150.422166ms)
-- stdout --
	* [functional-817032] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0108 23:05:30.673254   21773 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:05:30.673527   21773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:05:30.673539   21773 out.go:309] Setting ErrFile to fd 2...
	I0108 23:05:30.673547   21773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:05:30.673781   21773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
	I0108 23:05:30.674363   21773 out.go:303] Setting JSON to false
	I0108 23:05:30.675306   21773 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2848,"bootTime":1704752283,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:05:30.675403   21773 start.go:138] virtualization: kvm guest
	I0108 23:05:30.677670   21773 out.go:177] * [functional-817032] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:05:30.679311   21773 notify.go:220] Checking for updates...
	I0108 23:05:30.679321   21773 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:05:30.680760   21773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:05:30.682233   21773 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	I0108 23:05:30.683643   21773 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	I0108 23:05:30.685118   21773 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:05:30.686393   21773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:05:30.688191   21773 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 23:05:30.688793   21773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:05:30.688848   21773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:05:30.703476   21773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0108 23:05:30.703855   21773 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:05:30.704357   21773 main.go:141] libmachine: Using API Version  1
	I0108 23:05:30.704376   21773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:05:30.704785   21773 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:05:30.704933   21773 main.go:141] libmachine: (functional-817032) Calling .DriverName
	I0108 23:05:30.705144   21773 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:05:30.705426   21773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:05:30.705458   21773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:05:30.720516   21773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0108 23:05:30.720913   21773 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:05:30.721362   21773 main.go:141] libmachine: Using API Version  1
	I0108 23:05:30.721385   21773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:05:30.721654   21773 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:05:30.721809   21773 main.go:141] libmachine: (functional-817032) Calling .DriverName
	I0108 23:05:30.753936   21773 out.go:177] * Using the kvm2 driver based on existing profile
	I0108 23:05:30.755155   21773 start.go:298] selected driver: kvm2
	I0108 23:05:30.755168   21773 start.go:902] validating driver "kvm2" against &{Name:functional-817032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-817032 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.26 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:05:30.755275   21773 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:05:30.757382   21773 out.go:177] 
	W0108 23:05:30.758695   21773 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0108 23:05:30.760157   21773 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817032 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.30s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-817032 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-817032 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (151.385376ms)
-- stdout --
	* [functional-817032] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0108 23:05:30.517065   21740 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:05:30.517291   21740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:05:30.517299   21740 out.go:309] Setting ErrFile to fd 2...
	I0108 23:05:30.517304   21740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:05:30.517580   21740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
	I0108 23:05:30.518088   21740 out.go:303] Setting JSON to false
	I0108 23:05:30.518970   21740 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2848,"bootTime":1704752283,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:05:30.519037   21740 start.go:138] virtualization: kvm guest
	I0108 23:05:30.521367   21740 out.go:177] * [functional-817032] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0108 23:05:30.522844   21740 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:05:30.524207   21740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:05:30.522873   21740 notify.go:220] Checking for updates...
	I0108 23:05:30.526741   21740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	I0108 23:05:30.527959   21740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	I0108 23:05:30.529260   21740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:05:30.530510   21740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:05:30.532146   21740 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 23:05:30.532604   21740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:05:30.532673   21740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:05:30.547058   21740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
	I0108 23:05:30.547517   21740 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:05:30.548005   21740 main.go:141] libmachine: Using API Version  1
	I0108 23:05:30.548025   21740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:05:30.548365   21740 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:05:30.548510   21740 main.go:141] libmachine: (functional-817032) Calling .DriverName
	I0108 23:05:30.548745   21740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:05:30.549415   21740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:05:30.550427   21740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:05:30.564780   21740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34359
	I0108 23:05:30.565138   21740 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:05:30.565539   21740 main.go:141] libmachine: Using API Version  1
	I0108 23:05:30.565558   21740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:05:30.565850   21740 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:05:30.566021   21740 main.go:141] libmachine: (functional-817032) Calling .DriverName
	I0108 23:05:30.601391   21740 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0108 23:05:30.602851   21740 start.go:298] selected driver: kvm2
	I0108 23:05:30.602862   21740 start.go:902] validating driver "kvm2" against &{Name:functional-817032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704751654-17830@sha256:cabd32f8d9e8d804966eb117ed5366660f6363a4d1415f0b5480de6e396be617 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-817032 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.26 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I0108 23:05:30.602952   21740 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:05:30.605058   21740 out.go:177] 
	W0108 23:05:30.606856   21740 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0108 23:05:30.608033   21740 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.85s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.85s)

TestFunctional/parallel/ServiceCmdConnect (7.45s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-817032 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-817032 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-q47z7" [deea50dc-01da-4c45-81e5-63649d2ce931] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-q47z7" [deea50dc-01da-4c45-81e5-63649d2ce931] Running
2024/01/08 23:05:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004688474s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.26:32193
functional_test.go:1674: http://192.168.39.26:32193: success! body:

Hostname: hello-node-connect-55497b8b78-q47z7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.26:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.26:32193
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.45s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (45.47s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cecf28ba-a708-4bac-90bd-861c12761705] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004470244s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-817032 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-817032 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-817032 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-817032 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3f4da558-fafd-4a71-a63f-2fbcec79ab7f] Pending
helpers_test.go:344: "sp-pod" [3f4da558-fafd-4a71-a63f-2fbcec79ab7f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3f4da558-fafd-4a71-a63f-2fbcec79ab7f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004682845s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-817032 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-817032 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-817032 delete -f testdata/storage-provisioner/pod.yaml: (1.660704039s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-817032 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [596ea2e9-00b7-4b81-a208-60658154b345] Pending
helpers_test.go:344: "sp-pod" [596ea2e9-00b7-4b81-a208-60658154b345] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [596ea2e9-00b7-4b81-a208-60658154b345] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.00778633s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-817032 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.47s)

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh -n functional-817032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 cp functional-817032:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2422672484/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh -n functional-817032 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh -n functional-817032 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

TestFunctional/parallel/MySQL (39.62s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-817032 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-mmczc" [44b13fda-4b6f-4aeb-810e-11bea90c5489] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-mmczc" [44b13fda-4b6f-4aeb-810e-11bea90c5489] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 32.010286719s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-817032 exec mysql-859648c796-mmczc -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-817032 exec mysql-859648c796-mmczc -- mysql -ppassword -e "show databases;": exit status 1 (209.330943ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-817032 exec mysql-859648c796-mmczc -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-817032 exec mysql-859648c796-mmczc -- mysql -ppassword -e "show databases;": exit status 1 (189.038792ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-817032 exec mysql-859648c796-mmczc -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-817032 exec mysql-859648c796-mmczc -- mysql -ppassword -e "show databases;": exit status 1 (221.947959ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-817032 exec mysql-859648c796-mmczc -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-817032 exec mysql-859648c796-mmczc -- mysql -ppassword -e "show databases;": exit status 1 (124.718195ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
E0108 23:06:21.738752   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
functional_test.go:1806: (dbg) Run:  kubectl --context functional-817032 exec mysql-859648c796-mmczc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.62s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/15598/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo cat /etc/test/nested/copy/15598/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.58s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/15598.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo cat /etc/ssl/certs/15598.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/15598.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo cat /usr/share/ca-certificates/15598.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/155982.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo cat /etc/ssl/certs/155982.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/155982.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo cat /usr/share/ca-certificates/155982.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-817032 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817032 ssh "sudo systemctl is-active docker": exit status 1 (205.614802ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817032 ssh "sudo systemctl is-active crio": exit status 1 (229.49551ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

TestFunctional/parallel/License (0.84s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.84s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-817032 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-817032 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-bzsnz" [7a71c540-4d9a-4d0c-879b-3beef67e4542] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-bzsnz" [7a71c540-4d9a-4d0c-879b-3beef67e4542] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.004177042s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "294.454211ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "64.22067ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/MountCmd/any-port (11.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdany-port854289523/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1704755129856360418" to /tmp/TestFunctionalparallelMountCmdany-port854289523/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1704755129856360418" to /tmp/TestFunctionalparallelMountCmdany-port854289523/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1704755129856360418" to /tmp/TestFunctionalparallelMountCmdany-port854289523/001/test-1704755129856360418
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817032 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (259.843372ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan  8 23:05 created-by-test
-rw-r--r-- 1 docker docker 24 Jan  8 23:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan  8 23:05 test-1704755129856360418
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh cat /mount-9p/test-1704755129856360418
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-817032 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [271c468c-0e86-426b-ac8d-5f663ac45705] Pending
helpers_test.go:344: "busybox-mount" [271c468c-0e86-426b-ac8d-5f663ac45705] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [271c468c-0e86-426b-ac8d-5f663ac45705] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [271c468c-0e86-426b-ac8d-5f663ac45705] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.005935844s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-817032 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdany-port854289523/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.74s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "223.04957ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "56.582611ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
TestFunctional/parallel/MountCmd/specific-port (1.7s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdspecific-port1758340471/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817032 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (196.014108ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdspecific-port1758340471/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817032 ssh "sudo umount -f /mount-9p": exit status 1 (229.056245ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-817032 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdspecific-port1758340471/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)
TestFunctional/parallel/ServiceCmd/List (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 service list -o json
functional_test.go:1493: Took "495.684873ms" to run "out/minikube-linux-amd64 -p functional-817032 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.26:30841
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)
TestFunctional/parallel/MountCmd/VerifyCleanup (0.76s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3385433774/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3385433774/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3385433774/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-817032 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3385433774/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3385433774/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-817032 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3385433774/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.76s)
TestFunctional/parallel/ServiceCmd/Format (0.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)
TestFunctional/parallel/ServiceCmd/URL (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.26:30841
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)
TestFunctional/parallel/Version/components (0.63s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-817032 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-817032
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-817032
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817032 image ls --format short --alsologtostderr:
I0108 23:06:14.518814   23717 out.go:296] Setting OutFile to fd 1 ...
I0108 23:06:14.518952   23717 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:14.518962   23717 out.go:309] Setting ErrFile to fd 2...
I0108 23:06:14.518969   23717 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:14.519145   23717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
I0108 23:06:14.519761   23717 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:14.519889   23717 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:14.520433   23717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:14.520485   23717 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:14.534782   23717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
I0108 23:06:14.535231   23717 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:14.535733   23717 main.go:141] libmachine: Using API Version  1
I0108 23:06:14.535756   23717 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:14.536110   23717 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:14.536327   23717 main.go:141] libmachine: (functional-817032) Calling .GetState
I0108 23:06:14.538548   23717 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:14.538596   23717 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:14.552681   23717 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35277
I0108 23:06:14.553035   23717 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:14.553450   23717 main.go:141] libmachine: Using API Version  1
I0108 23:06:14.553470   23717 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:14.553800   23717 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:14.553996   23717 main.go:141] libmachine: (functional-817032) Calling .DriverName
I0108 23:06:14.554220   23717 ssh_runner.go:195] Run: systemctl --version
I0108 23:06:14.554247   23717 main.go:141] libmachine: (functional-817032) Calling .GetSSHHostname
I0108 23:06:14.557646   23717 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:14.557968   23717 main.go:141] libmachine: (functional-817032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:a0:5b", ip: ""} in network mk-functional-817032: {Iface:virbr1 ExpiryTime:2024-01-09 00:02:56 +0000 UTC Type:0 Mac:52:54:00:53:a0:5b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-817032 Clientid:01:52:54:00:53:a0:5b}
I0108 23:06:14.558004   23717 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined IP address 192.168.39.26 and MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:14.558329   23717 main.go:141] libmachine: (functional-817032) Calling .GetSSHPort
I0108 23:06:14.558476   23717 main.go:141] libmachine: (functional-817032) Calling .GetSSHKeyPath
I0108 23:06:14.558641   23717 main.go:141] libmachine: (functional-817032) Calling .GetSSHUsername
I0108 23:06:14.558810   23717 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/functional-817032/id_rsa Username:docker}
I0108 23:06:14.645422   23717 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 23:06:14.699477   23717 main.go:141] libmachine: Making call to close driver server
I0108 23:06:14.699497   23717 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:14.699808   23717 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:14.699839   23717 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:06:14.699847   23717 main.go:141] libmachine: Making call to close driver server
I0108 23:06:14.699855   23717 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:14.700175   23717 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:14.700199   23717 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:06:14.700214   23717 main.go:141] libmachine: (functional-817032) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-817032 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| gcr.io/google-containers/addon-resizer      | functional-817032  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/library/minikube-local-cache-test | functional-817032  | sha256:0160e6 | 1.01kB |
| docker.io/library/nginx                     | latest             | sha256:d453dd | 70.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817032 image ls --format table --alsologtostderr:
I0108 23:06:15.035489   23828 out.go:296] Setting OutFile to fd 1 ...
I0108 23:06:15.035638   23828 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:15.035646   23828 out.go:309] Setting ErrFile to fd 2...
I0108 23:06:15.035651   23828 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:15.035836   23828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
I0108 23:06:15.036379   23828 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:15.036482   23828 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:15.036870   23828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:15.036916   23828 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:15.050733   23828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46761
I0108 23:06:15.051067   23828 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:15.051680   23828 main.go:141] libmachine: Using API Version  1
I0108 23:06:15.051705   23828 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:15.052001   23828 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:15.052187   23828 main.go:141] libmachine: (functional-817032) Calling .GetState
I0108 23:06:15.054130   23828 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:15.054163   23828 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:15.067206   23828 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
I0108 23:06:15.067575   23828 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:15.068040   23828 main.go:141] libmachine: Using API Version  1
I0108 23:06:15.068076   23828 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:15.068359   23828 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:15.068547   23828 main.go:141] libmachine: (functional-817032) Calling .DriverName
I0108 23:06:15.068747   23828 ssh_runner.go:195] Run: systemctl --version
I0108 23:06:15.068768   23828 main.go:141] libmachine: (functional-817032) Calling .GetSSHHostname
I0108 23:06:15.071250   23828 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:15.071612   23828 main.go:141] libmachine: (functional-817032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:a0:5b", ip: ""} in network mk-functional-817032: {Iface:virbr1 ExpiryTime:2024-01-09 00:02:56 +0000 UTC Type:0 Mac:52:54:00:53:a0:5b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-817032 Clientid:01:52:54:00:53:a0:5b}
I0108 23:06:15.071645   23828 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined IP address 192.168.39.26 and MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:15.071767   23828 main.go:141] libmachine: (functional-817032) Calling .GetSSHPort
I0108 23:06:15.071920   23828 main.go:141] libmachine: (functional-817032) Calling .GetSSHKeyPath
I0108 23:06:15.072061   23828 main.go:141] libmachine: (functional-817032) Calling .GetSSHUsername
I0108 23:06:15.072182   23828 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/functional-817032/id_rsa Username:docker}
I0108 23:06:15.156023   23828 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 23:06:15.227414   23828 main.go:141] libmachine: Making call to close driver server
I0108 23:06:15.227436   23828 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:15.227732   23828 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:15.227759   23828 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:06:15.227769   23828 main.go:141] libmachine: Making call to close driver server
I0108 23:06:15.227778   23828 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:15.228007   23828 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:15.228026   23828 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-817032 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-817032"],"size":"10823156"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:0160e6f2292c46c5bc5bde893235d7eaa610a5bf0fe23497e60d997228ba516a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-817032"],"size":"1007"},{"id":"sha256:d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9","repoDigests":["docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026"],"repoTags":["docker.io/library/nginx:latest"],"size":"70519830"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817032 image ls --format json --alsologtostderr:
I0108 23:06:14.792844   23771 out.go:296] Setting OutFile to fd 1 ...
I0108 23:06:14.793093   23771 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:14.793103   23771 out.go:309] Setting ErrFile to fd 2...
I0108 23:06:14.793111   23771 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:14.793297   23771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
I0108 23:06:14.793867   23771 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:14.793995   23771 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:14.794433   23771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:14.794476   23771 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:14.808488   23771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
I0108 23:06:14.808921   23771 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:14.809459   23771 main.go:141] libmachine: Using API Version  1
I0108 23:06:14.809482   23771 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:14.809807   23771 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:14.809972   23771 main.go:141] libmachine: (functional-817032) Calling .GetState
I0108 23:06:14.811667   23771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:14.811699   23771 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:14.824553   23771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38057
I0108 23:06:14.824835   23771 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:14.825276   23771 main.go:141] libmachine: Using API Version  1
I0108 23:06:14.825322   23771 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:14.825577   23771 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:14.825752   23771 main.go:141] libmachine: (functional-817032) Calling .DriverName
I0108 23:06:14.825912   23771 ssh_runner.go:195] Run: systemctl --version
I0108 23:06:14.825938   23771 main.go:141] libmachine: (functional-817032) Calling .GetSSHHostname
I0108 23:06:14.828180   23771 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:14.828505   23771 main.go:141] libmachine: (functional-817032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:a0:5b", ip: ""} in network mk-functional-817032: {Iface:virbr1 ExpiryTime:2024-01-09 00:02:56 +0000 UTC Type:0 Mac:52:54:00:53:a0:5b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-817032 Clientid:01:52:54:00:53:a0:5b}
I0108 23:06:14.828536   23771 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined IP address 192.168.39.26 and MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:14.828599   23771 main.go:141] libmachine: (functional-817032) Calling .GetSSHPort
I0108 23:06:14.828769   23771 main.go:141] libmachine: (functional-817032) Calling .GetSSHKeyPath
I0108 23:06:14.828909   23771 main.go:141] libmachine: (functional-817032) Calling .GetSSHUsername
I0108 23:06:14.829022   23771 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/functional-817032/id_rsa Username:docker}
I0108 23:06:14.921329   23771 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 23:06:14.968666   23771 main.go:141] libmachine: Making call to close driver server
I0108 23:06:14.968682   23771 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:14.968956   23771 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:14.968975   23771 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:06:14.968976   23771 main.go:141] libmachine: (functional-817032) DBG | Closing plugin on server side
I0108 23:06:14.968988   23771 main.go:141] libmachine: Making call to close driver server
I0108 23:06:14.968999   23771 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:14.969272   23771 main.go:141] libmachine: (functional-817032) DBG | Closing plugin on server side
I0108 23:06:14.969357   23771 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:14.969376   23771 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-817032 image ls --format yaml --alsologtostderr:
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-817032
size: "10823156"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0160e6f2292c46c5bc5bde893235d7eaa610a5bf0fe23497e60d997228ba516a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-817032
size: "1007"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:d453dd892d9357f3559b967478ae9cbc417b52de66b53142f6c16c8a275486b9
repoDigests:
- docker.io/library/nginx@sha256:2bdc49f2f8ae8d8dc50ed00f2ee56d00385c6f8bc8a8b320d0a294d9e3b49026
repoTags:
- docker.io/library/nginx:latest
size: "70519830"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817032 image ls --format yaml --alsologtostderr:
I0108 23:06:14.519877   23716 out.go:296] Setting OutFile to fd 1 ...
I0108 23:06:14.519980   23716 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:14.519990   23716 out.go:309] Setting ErrFile to fd 2...
I0108 23:06:14.519995   23716 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:14.520187   23716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
I0108 23:06:14.521058   23716 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:14.521169   23716 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:14.521622   23716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:14.521666   23716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:14.537433   23716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36993
I0108 23:06:14.537856   23716 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:14.538445   23716 main.go:141] libmachine: Using API Version  1
I0108 23:06:14.538475   23716 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:14.538775   23716 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:14.538955   23716 main.go:141] libmachine: (functional-817032) Calling .GetState
I0108 23:06:14.540659   23716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:14.540694   23716 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:14.554139   23716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
I0108 23:06:14.554524   23716 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:14.554998   23716 main.go:141] libmachine: Using API Version  1
I0108 23:06:14.555022   23716 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:14.555344   23716 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:14.555560   23716 main.go:141] libmachine: (functional-817032) Calling .DriverName
I0108 23:06:14.555769   23716 ssh_runner.go:195] Run: systemctl --version
I0108 23:06:14.555804   23716 main.go:141] libmachine: (functional-817032) Calling .GetSSHHostname
I0108 23:06:14.559062   23716 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:14.559448   23716 main.go:141] libmachine: (functional-817032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:a0:5b", ip: ""} in network mk-functional-817032: {Iface:virbr1 ExpiryTime:2024-01-09 00:02:56 +0000 UTC Type:0 Mac:52:54:00:53:a0:5b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-817032 Clientid:01:52:54:00:53:a0:5b}
I0108 23:06:14.559480   23716 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined IP address 192.168.39.26 and MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:14.559592   23716 main.go:141] libmachine: (functional-817032) Calling .GetSSHPort
I0108 23:06:14.559744   23716 main.go:141] libmachine: (functional-817032) Calling .GetSSHKeyPath
I0108 23:06:14.559880   23716 main.go:141] libmachine: (functional-817032) Calling .GetSSHUsername
I0108 23:06:14.560022   23716 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/functional-817032/id_rsa Username:docker}
I0108 23:06:14.651531   23716 ssh_runner.go:195] Run: sudo crictl images --output json
I0108 23:06:14.722321   23716 main.go:141] libmachine: Making call to close driver server
I0108 23:06:14.722344   23716 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:14.722654   23716 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:14.722679   23716 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:06:14.722692   23716 main.go:141] libmachine: Making call to close driver server
I0108 23:06:14.722701   23716 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:14.722930   23716 main.go:141] libmachine: (functional-817032) DBG | Closing plugin on server side
I0108 23:06:14.722982   23716 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:14.722996   23716 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-817032 ssh pgrep buildkitd: exit status 1 (219.993421ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image build -t localhost/my-image:functional-817032 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 image build -t localhost/my-image:functional-817032 testdata/build --alsologtostderr: (5.176680415s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-817032 image build -t localhost/my-image:functional-817032 testdata/build --alsologtostderr:
I0108 23:06:14.985630   23817 out.go:296] Setting OutFile to fd 1 ...
I0108 23:06:14.985885   23817 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:14.985898   23817 out.go:309] Setting ErrFile to fd 2...
I0108 23:06:14.985902   23817 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 23:06:14.986080   23817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
I0108 23:06:14.986608   23817 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:14.987159   23817 config.go:182] Loaded profile config "functional-817032": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0108 23:06:14.987562   23817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:14.987598   23817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:15.002921   23817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
I0108 23:06:15.003309   23817 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:15.003866   23817 main.go:141] libmachine: Using API Version  1
I0108 23:06:15.003901   23817 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:15.004224   23817 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:15.004420   23817 main.go:141] libmachine: (functional-817032) Calling .GetState
I0108 23:06:15.006410   23817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0108 23:06:15.006443   23817 main.go:141] libmachine: Launching plugin server for driver kvm2
I0108 23:06:15.026022   23817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35521
I0108 23:06:15.026412   23817 main.go:141] libmachine: () Calling .GetVersion
I0108 23:06:15.026866   23817 main.go:141] libmachine: Using API Version  1
I0108 23:06:15.026888   23817 main.go:141] libmachine: () Calling .SetConfigRaw
I0108 23:06:15.027189   23817 main.go:141] libmachine: () Calling .GetMachineName
I0108 23:06:15.027363   23817 main.go:141] libmachine: (functional-817032) Calling .DriverName
I0108 23:06:15.027547   23817 ssh_runner.go:195] Run: systemctl --version
I0108 23:06:15.027576   23817 main.go:141] libmachine: (functional-817032) Calling .GetSSHHostname
I0108 23:06:15.030797   23817 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:15.031214   23817 main.go:141] libmachine: (functional-817032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:a0:5b", ip: ""} in network mk-functional-817032: {Iface:virbr1 ExpiryTime:2024-01-09 00:02:56 +0000 UTC Type:0 Mac:52:54:00:53:a0:5b Iaid: IPaddr:192.168.39.26 Prefix:24 Hostname:functional-817032 Clientid:01:52:54:00:53:a0:5b}
I0108 23:06:15.031249   23817 main.go:141] libmachine: (functional-817032) DBG | domain functional-817032 has defined IP address 192.168.39.26 and MAC address 52:54:00:53:a0:5b in network mk-functional-817032
I0108 23:06:15.031388   23817 main.go:141] libmachine: (functional-817032) Calling .GetSSHPort
I0108 23:06:15.031557   23817 main.go:141] libmachine: (functional-817032) Calling .GetSSHKeyPath
I0108 23:06:15.031718   23817 main.go:141] libmachine: (functional-817032) Calling .GetSSHUsername
I0108 23:06:15.031842   23817 sshutil.go:53] new ssh client: &{IP:192.168.39.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/functional-817032/id_rsa Username:docker}
I0108 23:06:15.113938   23817 build_images.go:151] Building image from path: /tmp/build.2109631200.tar
I0108 23:06:15.113999   23817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0108 23:06:15.123182   23817 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2109631200.tar
I0108 23:06:15.127444   23817 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2109631200.tar: stat -c "%s %y" /var/lib/minikube/build/build.2109631200.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2109631200.tar': No such file or directory
I0108 23:06:15.127468   23817 ssh_runner.go:362] scp /tmp/build.2109631200.tar --> /var/lib/minikube/build/build.2109631200.tar (3072 bytes)
I0108 23:06:15.154122   23817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2109631200
I0108 23:06:15.166464   23817 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2109631200 -xf /var/lib/minikube/build/build.2109631200.tar
I0108 23:06:15.174856   23817 containerd.go:378] Building image: /var/lib/minikube/build/build.2109631200
I0108 23:06:15.174917   23817 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2109631200 --local dockerfile=/var/lib/minikube/build/build.2109631200 --output type=image,name=localhost/my-image:functional-817032
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:9dd88a822acd4462a0f322aa35f6df3b4c1307801c37f2f58a7609983a55ad01 0.0s done
#8 exporting config sha256:62550c638eb69ca201599c47de6d80708f470c58918bc6fff2b79f37d23b27a8 0.0s done
#8 naming to localhost/my-image:functional-817032 done
#8 DONE 0.2s
I0108 23:06:20.061604   23817 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2109631200 --local dockerfile=/var/lib/minikube/build/build.2109631200 --output type=image,name=localhost/my-image:functional-817032: (4.88666213s)
I0108 23:06:20.061660   23817 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2109631200
I0108 23:06:20.079420   23817 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2109631200.tar
I0108 23:06:20.095849   23817 build_images.go:207] Built localhost/my-image:functional-817032 from /tmp/build.2109631200.tar
I0108 23:06:20.095887   23817 build_images.go:123] succeeded building to: functional-817032
I0108 23:06:20.095892   23817 build_images.go:124] failed building to: 
I0108 23:06:20.095941   23817 main.go:141] libmachine: Making call to close driver server
I0108 23:06:20.095970   23817 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:20.096300   23817 main.go:141] libmachine: (functional-817032) DBG | Closing plugin on server side
I0108 23:06:20.096352   23817 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:20.096369   23817 main.go:141] libmachine: Making call to close connection to plugin binary
I0108 23:06:20.096387   23817 main.go:141] libmachine: Making call to close driver server
I0108 23:06:20.096398   23817 main.go:141] libmachine: (functional-817032) Calling .Close
I0108 23:06:20.096650   23817 main.go:141] libmachine: Successfully made call to close driver server
I0108 23:06:20.096668   23817 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.682459742s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-817032
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image load --daemon gcr.io/google-containers/addon-resizer:functional-817032 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 image load --daemon gcr.io/google-containers/addon-resizer:functional-817032 --alsologtostderr: (4.311945447s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image load --daemon gcr.io/google-containers/addon-resizer:functional-817032 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 image load --daemon gcr.io/google-containers/addon-resizer:functional-817032 --alsologtostderr: (2.913275021s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.609542455s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-817032
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image load --daemon gcr.io/google-containers/addon-resizer:functional-817032 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 image load --daemon gcr.io/google-containers/addon-resizer:functional-817032 --alsologtostderr: (4.439165814s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image save gcr.io/google-containers/addon-resizer:functional-817032 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 image save gcr.io/google-containers/addon-resizer:functional-817032 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.289669663s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image rm gcr.io/google-containers/addon-resizer:functional-817032 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.970561661s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-817032
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-817032 image save --daemon gcr.io/google-containers/addon-resizer:functional-817032 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-817032 image save --daemon gcr.io/google-containers/addon-resizer:functional-817032 --alsologtostderr: (1.459205433s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-817032
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.49s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-817032
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-817032
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-817032
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (138.48s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-546712 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0108 23:08:37.895427   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-546712 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (2m18.481485877s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (138.48s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.95s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-546712 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-546712 addons enable ingress --alsologtostderr -v=5: (14.946069908s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.95s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-546712 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.55s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.81s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-546712 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0108 23:09:05.582527   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-546712 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.478660572s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-546712 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-546712 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fd708486-b7ef-46a1-8f2b-6e1a22cfd4d8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fd708486-b7ef-46a1-8f2b-6e1a22cfd4d8] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.004476164s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-546712 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-546712 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-546712 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.192
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-546712 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-546712 addons disable ingress-dns --alsologtostderr -v=1: (2.461450605s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-546712 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-546712 addons disable ingress --alsologtostderr -v=1: (7.662604911s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.81s)

TestJSONOutput/start/Command (100.58s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-611622 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0108 23:10:29.226016   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:29.231266   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:29.241499   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:29.261744   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:29.301990   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:29.382406   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:29.542779   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:29.863347   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:30.504232   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:31.784827   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:34.346608   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:39.467306   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:10:49.708258   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:11:10.188833   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-611622 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m40.57882569s)
--- PASS: TestJSONOutput/start/Command (100.58s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-611622 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-611622 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-611622 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-611622 --output=json --user=testUser: (7.100609058s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-812613 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-812613 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.03997ms)

-- stdout --
	{"specversion":"1.0","id":"f09cdf85-8b59-4137-b354-bc9cd6668096","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-812613] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6810681d-a83d-480d-89e7-00b594216f0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17830"}}
	{"specversion":"1.0","id":"e2277bac-598c-4300-bda4-8fd848708c53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6bacc755-512b-4927-92e9-52db21c31ed6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig"}}
	{"specversion":"1.0","id":"c0257622-746f-4ad7-9574-3299c3947d79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube"}}
	{"specversion":"1.0","id":"944f52b8-eabf-488b-b5e9-23d8a896523d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f7d24136-8978-4063-98c4-67c00420215d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"79562d31-31c2-4ce1-8580-7c6836026db2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-812613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-812613
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (99.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-592936 --driver=kvm2  --container-runtime=containerd
E0108 23:11:51.149973   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-592936 --driver=kvm2  --container-runtime=containerd: (48.458016706s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-595021 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-595021 --driver=kvm2  --container-runtime=containerd: (48.643136191s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-592936
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-595021
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-595021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-595021
helpers_test.go:175: Cleaning up "first-592936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-592936
--- PASS: TestMinikubeProfile (99.88s)

TestMountStart/serial/StartWithMountFirst (28.66s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-196828 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0108 23:13:13.071683   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-196828 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.660426538s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.66s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-196828 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-196828 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (29.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-224384 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0108 23:13:37.894889   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:13:59.339355   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:13:59.344601   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:13:59.354866   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:13:59.375104   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:13:59.415300   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:13:59.495524   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:13:59.655958   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:13:59.976591   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:14:00.617532   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:14:01.897846   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:14:04.459591   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-224384 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.729584084s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.73s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224384 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224384 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-196828 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224384 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224384 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.12s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-224384
E0108 23:14:09.580340   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-224384: (1.124524204s)
--- PASS: TestMountStart/serial/Stop (1.12s)

TestMountStart/serial/RestartStopped (23.84s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-224384
E0108 23:14:19.820977   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-224384: (22.839630343s)
--- PASS: TestMountStart/serial/RestartStopped (23.84s)

TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224384 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-224384 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (181.84s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-312380 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0108 23:14:40.301816   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:15:21.263363   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:15:29.226258   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:15:56.912748   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:16:43.184910   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-312380 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m1.420944527s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (181.84s)

TestMultiNode/serial/DeployApp2Nodes (6.56s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-312380 -- rollout status deployment/busybox: (4.772356615s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-4ll4l -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-jtx7w -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-4ll4l -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-jtx7w -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-4ll4l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-jtx7w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.56s)

TestMultiNode/serial/PingHostFrom2Pods (0.88s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-4ll4l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-4ll4l -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-jtx7w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-312380 -- exec busybox-5bc68d56bd-jtx7w -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

TestMultiNode/serial/AddNode (43.22s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-312380 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-312380 -v 3 --alsologtostderr: (42.660400166s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.22s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-312380 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.47s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp testdata/cp-test.txt multinode-312380:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp multinode-312380:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2250960361/001/cp-test_multinode-312380.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp multinode-312380:/home/docker/cp-test.txt multinode-312380-m02:/home/docker/cp-test_multinode-312380_multinode-312380-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m02 "sudo cat /home/docker/cp-test_multinode-312380_multinode-312380-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp multinode-312380:/home/docker/cp-test.txt multinode-312380-m03:/home/docker/cp-test_multinode-312380_multinode-312380-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m03 "sudo cat /home/docker/cp-test_multinode-312380_multinode-312380-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp testdata/cp-test.txt multinode-312380-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp multinode-312380-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2250960361/001/cp-test_multinode-312380-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp multinode-312380-m02:/home/docker/cp-test.txt multinode-312380:/home/docker/cp-test_multinode-312380-m02_multinode-312380.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380 "sudo cat /home/docker/cp-test_multinode-312380-m02_multinode-312380.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp multinode-312380-m02:/home/docker/cp-test.txt multinode-312380-m03:/home/docker/cp-test_multinode-312380-m02_multinode-312380-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m03 "sudo cat /home/docker/cp-test_multinode-312380-m02_multinode-312380-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp testdata/cp-test.txt multinode-312380-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp multinode-312380-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2250960361/001/cp-test_multinode-312380-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp multinode-312380-m03:/home/docker/cp-test.txt multinode-312380:/home/docker/cp-test_multinode-312380-m03_multinode-312380.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380 "sudo cat /home/docker/cp-test_multinode-312380-m03_multinode-312380.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 cp multinode-312380-m03:/home/docker/cp-test.txt multinode-312380-m02:/home/docker/cp-test_multinode-312380-m03_multinode-312380-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 ssh -n multinode-312380-m02 "sudo cat /home/docker/cp-test_multinode-312380-m03_multinode-312380-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.47s)

TestMultiNode/serial/StopNode (2.2s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-312380 node stop m03: (1.321356456s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-312380 status: exit status 7 (438.708574ms)

-- stdout --
	multinode-312380
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-312380-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-312380-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-312380 status --alsologtostderr: exit status 7 (440.730533ms)

-- stdout --
	multinode-312380
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-312380-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-312380-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0108 23:18:37.253281   31059 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:18:37.253536   31059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:18:37.253546   31059 out.go:309] Setting ErrFile to fd 2...
	I0108 23:18:37.253551   31059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:18:37.253729   31059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
	I0108 23:18:37.253885   31059 out.go:303] Setting JSON to false
	I0108 23:18:37.253916   31059 mustload.go:65] Loading cluster: multinode-312380
	I0108 23:18:37.254024   31059 notify.go:220] Checking for updates...
	I0108 23:18:37.254271   31059 config.go:182] Loaded profile config "multinode-312380": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 23:18:37.254283   31059 status.go:255] checking status of multinode-312380 ...
	I0108 23:18:37.254711   31059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:18:37.254775   31059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:37.280320   31059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I0108 23:18:37.280685   31059 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:37.281185   31059 main.go:141] libmachine: Using API Version  1
	I0108 23:18:37.281206   31059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:37.281503   31059 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:37.281700   31059 main.go:141] libmachine: (multinode-312380) Calling .GetState
	I0108 23:18:37.283096   31059 status.go:330] multinode-312380 host status = "Running" (err=<nil>)
	I0108 23:18:37.283113   31059 host.go:66] Checking if "multinode-312380" exists ...
	I0108 23:18:37.283358   31059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:18:37.283394   31059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:37.296958   31059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
	I0108 23:18:37.297303   31059 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:37.297700   31059 main.go:141] libmachine: Using API Version  1
	I0108 23:18:37.297733   31059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:37.298006   31059 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:37.298167   31059 main.go:141] libmachine: (multinode-312380) Calling .GetIP
	I0108 23:18:37.300472   31059 main.go:141] libmachine: (multinode-312380) DBG | domain multinode-312380 has defined MAC address 52:54:00:e0:1a:f5 in network mk-multinode-312380
	I0108 23:18:37.300813   31059 main.go:141] libmachine: (multinode-312380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:1a:f5", ip: ""} in network mk-multinode-312380: {Iface:virbr1 ExpiryTime:2024-01-09 00:14:50 +0000 UTC Type:0 Mac:52:54:00:e0:1a:f5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-312380 Clientid:01:52:54:00:e0:1a:f5}
	I0108 23:18:37.300850   31059 main.go:141] libmachine: (multinode-312380) DBG | domain multinode-312380 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:1a:f5 in network mk-multinode-312380
	I0108 23:18:37.300964   31059 host.go:66] Checking if "multinode-312380" exists ...
	I0108 23:18:37.301270   31059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:18:37.301305   31059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:37.316214   31059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39411
	I0108 23:18:37.316587   31059 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:37.316972   31059 main.go:141] libmachine: Using API Version  1
	I0108 23:18:37.316991   31059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:37.317306   31059 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:37.317480   31059 main.go:141] libmachine: (multinode-312380) Calling .DriverName
	I0108 23:18:37.317661   31059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:18:37.317684   31059 main.go:141] libmachine: (multinode-312380) Calling .GetSSHHostname
	I0108 23:18:37.320258   31059 main.go:141] libmachine: (multinode-312380) DBG | domain multinode-312380 has defined MAC address 52:54:00:e0:1a:f5 in network mk-multinode-312380
	I0108 23:18:37.320687   31059 main.go:141] libmachine: (multinode-312380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:1a:f5", ip: ""} in network mk-multinode-312380: {Iface:virbr1 ExpiryTime:2024-01-09 00:14:50 +0000 UTC Type:0 Mac:52:54:00:e0:1a:f5 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:multinode-312380 Clientid:01:52:54:00:e0:1a:f5}
	I0108 23:18:37.320714   31059 main.go:141] libmachine: (multinode-312380) DBG | domain multinode-312380 has defined IP address 192.168.39.137 and MAC address 52:54:00:e0:1a:f5 in network mk-multinode-312380
	I0108 23:18:37.320808   31059 main.go:141] libmachine: (multinode-312380) Calling .GetSSHPort
	I0108 23:18:37.320969   31059 main.go:141] libmachine: (multinode-312380) Calling .GetSSHKeyPath
	I0108 23:18:37.321112   31059 main.go:141] libmachine: (multinode-312380) Calling .GetSSHUsername
	I0108 23:18:37.321238   31059 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/multinode-312380/id_rsa Username:docker}
	I0108 23:18:37.411527   31059 ssh_runner.go:195] Run: systemctl --version
	I0108 23:18:37.417012   31059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:18:37.430802   31059 kubeconfig.go:92] found "multinode-312380" server: "https://192.168.39.137:8443"
	I0108 23:18:37.430822   31059 api_server.go:166] Checking apiserver status ...
	I0108 23:18:37.430849   31059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0108 23:18:37.442627   31059 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1105/cgroup
	I0108 23:18:37.452614   31059 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/podbbc6f4cff1756f666f1dcc1e5d2a41f6/1960c2855f6c27ff4ed7eecac36db6e970e2e11a0e68fc72f9eac25a3310d60b"
	I0108 23:18:37.452687   31059 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podbbc6f4cff1756f666f1dcc1e5d2a41f6/1960c2855f6c27ff4ed7eecac36db6e970e2e11a0e68fc72f9eac25a3310d60b/freezer.state
	I0108 23:18:37.462979   31059 api_server.go:204] freezer state: "THAWED"
	I0108 23:18:37.462994   31059 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0108 23:18:37.467470   31059 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I0108 23:18:37.467490   31059 status.go:421] multinode-312380 apiserver status = Running (err=<nil>)
	I0108 23:18:37.467502   31059 status.go:257] multinode-312380 status: &{Name:multinode-312380 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 23:18:37.467530   31059 status.go:255] checking status of multinode-312380-m02 ...
	I0108 23:18:37.467891   31059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:18:37.467928   31059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:37.482300   31059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36205
	I0108 23:18:37.482629   31059 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:37.483040   31059 main.go:141] libmachine: Using API Version  1
	I0108 23:18:37.483054   31059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:37.483351   31059 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:37.483518   31059 main.go:141] libmachine: (multinode-312380-m02) Calling .GetState
	I0108 23:18:37.485025   31059 status.go:330] multinode-312380-m02 host status = "Running" (err=<nil>)
	I0108 23:18:37.485043   31059 host.go:66] Checking if "multinode-312380-m02" exists ...
	I0108 23:18:37.485327   31059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:18:37.485356   31059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:37.498896   31059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41421
	I0108 23:18:37.499202   31059 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:37.499630   31059 main.go:141] libmachine: Using API Version  1
	I0108 23:18:37.499650   31059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:37.499914   31059 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:37.500083   31059 main.go:141] libmachine: (multinode-312380-m02) Calling .GetIP
	I0108 23:18:37.502840   31059 main.go:141] libmachine: (multinode-312380-m02) DBG | domain multinode-312380-m02 has defined MAC address 52:54:00:f8:fe:ff in network mk-multinode-312380
	I0108 23:18:37.503467   31059 main.go:141] libmachine: (multinode-312380-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:fe:ff", ip: ""} in network mk-multinode-312380: {Iface:virbr1 ExpiryTime:2024-01-09 00:15:59 +0000 UTC Type:0 Mac:52:54:00:f8:fe:ff Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:multinode-312380-m02 Clientid:01:52:54:00:f8:fe:ff}
	I0108 23:18:37.503506   31059 main.go:141] libmachine: (multinode-312380-m02) DBG | domain multinode-312380-m02 has defined IP address 192.168.39.138 and MAC address 52:54:00:f8:fe:ff in network mk-multinode-312380
	I0108 23:18:37.503625   31059 host.go:66] Checking if "multinode-312380-m02" exists ...
	I0108 23:18:37.503958   31059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:18:37.504006   31059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:37.517786   31059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43733
	I0108 23:18:37.518110   31059 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:37.518460   31059 main.go:141] libmachine: Using API Version  1
	I0108 23:18:37.518493   31059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:37.518760   31059 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:37.518917   31059 main.go:141] libmachine: (multinode-312380-m02) Calling .DriverName
	I0108 23:18:37.519127   31059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0108 23:18:37.519146   31059 main.go:141] libmachine: (multinode-312380-m02) Calling .GetSSHHostname
	I0108 23:18:37.521622   31059 main.go:141] libmachine: (multinode-312380-m02) DBG | domain multinode-312380-m02 has defined MAC address 52:54:00:f8:fe:ff in network mk-multinode-312380
	I0108 23:18:37.522022   31059 main.go:141] libmachine: (multinode-312380-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:fe:ff", ip: ""} in network mk-multinode-312380: {Iface:virbr1 ExpiryTime:2024-01-09 00:15:59 +0000 UTC Type:0 Mac:52:54:00:f8:fe:ff Iaid: IPaddr:192.168.39.138 Prefix:24 Hostname:multinode-312380-m02 Clientid:01:52:54:00:f8:fe:ff}
	I0108 23:18:37.522050   31059 main.go:141] libmachine: (multinode-312380-m02) DBG | domain multinode-312380-m02 has defined IP address 192.168.39.138 and MAC address 52:54:00:f8:fe:ff in network mk-multinode-312380
	I0108 23:18:37.522192   31059 main.go:141] libmachine: (multinode-312380-m02) Calling .GetSSHPort
	I0108 23:18:37.522345   31059 main.go:141] libmachine: (multinode-312380-m02) Calling .GetSSHKeyPath
	I0108 23:18:37.522474   31059 main.go:141] libmachine: (multinode-312380-m02) Calling .GetSSHUsername
	I0108 23:18:37.522560   31059 sshutil.go:53] new ssh client: &{IP:192.168.39.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17830-8357/.minikube/machines/multinode-312380-m02/id_rsa Username:docker}
	I0108 23:18:37.603611   31059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0108 23:18:37.616430   31059 status.go:257] multinode-312380-m02 status: &{Name:multinode-312380-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0108 23:18:37.616468   31059 status.go:255] checking status of multinode-312380-m03 ...
	I0108 23:18:37.616888   31059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:18:37.616933   31059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:18:37.631682   31059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0108 23:18:37.632074   31059 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:18:37.632528   31059 main.go:141] libmachine: Using API Version  1
	I0108 23:18:37.632547   31059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:18:37.632854   31059 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:18:37.633035   31059 main.go:141] libmachine: (multinode-312380-m03) Calling .GetState
	I0108 23:18:37.634559   31059 status.go:330] multinode-312380-m03 host status = "Stopped" (err=<nil>)
	I0108 23:18:37.634571   31059 status.go:343] host is not running, skipping remaining checks
	I0108 23:18:37.634576   31059 status.go:257] multinode-312380-m03 status: &{Name:multinode-312380-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)

TestMultiNode/serial/StartAfterStop (26.09s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 node start m03 --alsologtostderr
E0108 23:18:37.894644   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:18:59.339195   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-312380 node start m03 --alsologtostderr: (25.467031977s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.09s)

TestMultiNode/serial/RestartKeepsNodes (313.93s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-312380
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-312380
E0108 23:19:27.025189   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:20:00.943434   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:20:29.225550   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-312380: (3m4.936663869s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-312380 --wait=true -v=8 --alsologtostderr
E0108 23:23:37.895282   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:23:59.339068   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-312380 --wait=true -v=8 --alsologtostderr: (2m8.873394096s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-312380
--- PASS: TestMultiNode/serial/RestartKeepsNodes (313.93s)

TestMultiNode/serial/DeleteNode (1.69s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-312380 node delete m03: (1.162087573s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.69s)

TestMultiNode/serial/StopMultiNode (183.44s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 stop
E0108 23:25:29.225680   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:26:52.273022   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p multinode-312380 stop: (3m3.255909191s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-312380 status: exit status 7 (90.339951ms)

-- stdout --
	multinode-312380
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-312380-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-312380 status --alsologtostderr: exit status 7 (93.372212ms)

-- stdout --
	multinode-312380
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-312380-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0108 23:27:22.746878   33233 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:27:22.747090   33233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:27:22.747098   33233 out.go:309] Setting ErrFile to fd 2...
	I0108 23:27:22.747102   33233 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:27:22.747265   33233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
	I0108 23:27:22.747414   33233 out.go:303] Setting JSON to false
	I0108 23:27:22.747452   33233 mustload.go:65] Loading cluster: multinode-312380
	I0108 23:27:22.747549   33233 notify.go:220] Checking for updates...
	I0108 23:27:22.747863   33233 config.go:182] Loaded profile config "multinode-312380": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 23:27:22.747876   33233 status.go:255] checking status of multinode-312380 ...
	I0108 23:27:22.748325   33233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:27:22.748408   33233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:27:22.767157   33233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42917
	I0108 23:27:22.767476   33233 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:27:22.768024   33233 main.go:141] libmachine: Using API Version  1
	I0108 23:27:22.768050   33233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:27:22.768361   33233 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:27:22.768565   33233 main.go:141] libmachine: (multinode-312380) Calling .GetState
	I0108 23:27:22.770199   33233 status.go:330] multinode-312380 host status = "Stopped" (err=<nil>)
	I0108 23:27:22.770214   33233 status.go:343] host is not running, skipping remaining checks
	I0108 23:27:22.770219   33233 status.go:257] multinode-312380 status: &{Name:multinode-312380 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0108 23:27:22.770263   33233 status.go:255] checking status of multinode-312380-m02 ...
	I0108 23:27:22.770567   33233 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0108 23:27:22.770606   33233 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0108 23:27:22.784069   33233 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0108 23:27:22.784401   33233 main.go:141] libmachine: () Calling .GetVersion
	I0108 23:27:22.784857   33233 main.go:141] libmachine: Using API Version  1
	I0108 23:27:22.784881   33233 main.go:141] libmachine: () Calling .SetConfigRaw
	I0108 23:27:22.785200   33233 main.go:141] libmachine: () Calling .GetMachineName
	I0108 23:27:22.785362   33233 main.go:141] libmachine: (multinode-312380-m02) Calling .GetState
	I0108 23:27:22.786854   33233 status.go:330] multinode-312380-m02 host status = "Stopped" (err=<nil>)
	I0108 23:27:22.786867   33233 status.go:343] host is not running, skipping remaining checks
	I0108 23:27:22.786874   33233 status.go:257] multinode-312380-m02 status: &{Name:multinode-312380-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.44s)

TestMultiNode/serial/RestartMultiNode (91.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-312380 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0108 23:28:37.895059   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-312380 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m30.741154678s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-312380 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (91.27s)

TestMultiNode/serial/ValidateNameConflict (54.29s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-312380
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-312380-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-312380-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (70.736962ms)

-- stdout --
	* [multinode-312380-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-312380-m02' is duplicated with machine name 'multinode-312380-m02' in profile 'multinode-312380'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-312380-m03 --driver=kvm2  --container-runtime=containerd
E0108 23:28:59.339109   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-312380-m03 --driver=kvm2  --container-runtime=containerd: (52.963125723s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-312380
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-312380: exit status 80 (226.666291ms)

-- stdout --
	* Adding node m03 to cluster multinode-312380
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-312380-m03 already exists in multinode-312380-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-312380-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.29s)

TestPreload (345.44s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-176986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0108 23:30:22.385832   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:30:29.225571   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-176986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m44.783913439s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-176986 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-176986 image pull gcr.io/k8s-minikube/busybox: (3.216073445s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-176986
E0108 23:33:37.894564   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:33:59.339271   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-176986: (1m31.561314824s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-176986 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0108 23:35:29.225764   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-176986 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m24.593760351s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-176986 image list
helpers_test.go:175: Cleaning up "test-preload-176986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-176986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-176986: (1.052673564s)
--- PASS: TestPreload (345.44s)

TestScheduledStopUnix (120.48s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-494034 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-494034 --memory=2048 --driver=kvm2  --container-runtime=containerd: (48.754458263s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-494034 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-494034 -n scheduled-stop-494034
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-494034 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-494034 --cancel-scheduled
E0108 23:36:40.944129   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-494034 -n scheduled-stop-494034
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-494034
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-494034 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-494034
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-494034: exit status 7 (73.487836ms)

-- stdout --
	scheduled-stop-494034
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-494034 -n scheduled-stop-494034
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-494034 -n scheduled-stop-494034: exit status 7 (70.375705ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-494034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-494034
--- PASS: TestScheduledStopUnix (120.48s)

TestRunningBinaryUpgrade (225.43s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.1205790913.exe start -p running-upgrade-817239 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0108 23:40:29.225866   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.1205790913.exe start -p running-upgrade-817239 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m20.748416647s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-817239 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-817239 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m19.993511105s)
helpers_test.go:175: Cleaning up "running-upgrade-817239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-817239
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-817239: (1.266541882s)
--- PASS: TestRunningBinaryUpgrade (225.43s)

TestKubernetesUpgrade (184.42s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-081092 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-081092 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m7.883158885s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-081092
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-081092: (2.101683045s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-081092 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-081092 status --format={{.Host}}: exit status 7 (85.173364ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-081092 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-081092 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m16.082429791s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-081092 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-081092 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-081092 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (163.608584ms)

-- stdout --
	* [kubernetes-upgrade-081092] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-081092
	    minikube start -p kubernetes-upgrade-081092 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0810922 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-081092 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-081092 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0108 23:43:32.273176   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:43:37.894822   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-081092 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (36.511580565s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-081092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-081092
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-081092: (1.502674132s)
--- PASS: TestKubernetesUpgrade (184.42s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987806 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-987806 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (103.432996ms)

-- stdout --
	* [NoKubernetes-987806] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (128.09s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987806 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987806 --driver=kvm2  --container-runtime=containerd: (2m7.778413019s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-987806 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (128.09s)

TestNoKubernetes/serial/StartWithStopK8s (18.01s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987806 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987806 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (16.67803962s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-987806 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-987806 status -o json: exit status 2 (250.145523ms)

-- stdout --
	{"Name":"NoKubernetes-987806","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-987806
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-987806: (1.083377167s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.01s)

TestNetworkPlugins/group/false (3.41s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-040271 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-040271 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (118.614726ms)

-- stdout --
	* [false-040271] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0108 23:39:52.142735   38843 out.go:296] Setting OutFile to fd 1 ...
	I0108 23:39:52.142853   38843 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:39:52.142863   38843 out.go:309] Setting ErrFile to fd 2...
	I0108 23:39:52.142870   38843 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0108 23:39:52.143154   38843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17830-8357/.minikube/bin
	I0108 23:39:52.143854   38843 out.go:303] Setting JSON to false
	I0108 23:39:52.145102   38843 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4909,"bootTime":1704752283,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0108 23:39:52.145185   38843 start.go:138] virtualization: kvm guest
	I0108 23:39:52.147798   38843 out.go:177] * [false-040271] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0108 23:39:52.149346   38843 notify.go:220] Checking for updates...
	I0108 23:39:52.150848   38843 out.go:177]   - MINIKUBE_LOCATION=17830
	I0108 23:39:52.152532   38843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0108 23:39:52.154039   38843 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17830-8357/kubeconfig
	I0108 23:39:52.155523   38843 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17830-8357/.minikube
	I0108 23:39:52.156887   38843 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0108 23:39:52.158256   38843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0108 23:39:52.160033   38843 config.go:182] Loaded profile config "NoKubernetes-987806": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0108 23:39:52.160130   38843 config.go:182] Loaded profile config "cert-expiration-092911": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 23:39:52.160209   38843 config.go:182] Loaded profile config "force-systemd-flag-463023": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0108 23:39:52.160283   38843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0108 23:39:52.196221   38843 out.go:177] * Using the kvm2 driver based on user configuration
	I0108 23:39:52.197417   38843 start.go:298] selected driver: kvm2
	I0108 23:39:52.197431   38843 start.go:902] validating driver "kvm2" against <nil>
	I0108 23:39:52.197444   38843 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0108 23:39:52.199359   38843 out.go:177] 
	W0108 23:39:52.200522   38843 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0108 23:39:52.201762   38843 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-040271 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-040271

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-040271

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-040271

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-040271

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-040271

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-040271

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-040271

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-040271

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-040271

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-040271

>>> host: /etc/nsswitch.conf:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /etc/hosts:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /etc/resolv.conf:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-040271

>>> host: crictl pods:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: crictl containers:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> k8s: describe netcat deployment:
error: context "false-040271" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-040271" does not exist

>>> k8s: netcat logs:
error: context "false-040271" does not exist

>>> k8s: describe coredns deployment:
error: context "false-040271" does not exist

>>> k8s: describe coredns pods:
error: context "false-040271" does not exist

>>> k8s: coredns logs:
error: context "false-040271" does not exist

>>> k8s: describe api server pod(s):
error: context "false-040271" does not exist

>>> k8s: api server logs:
error: context "false-040271" does not exist

>>> host: /etc/cni:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: ip a s:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: ip r s:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: iptables-save:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: iptables table nat:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> k8s: describe kube-proxy daemon set:
error: context "false-040271" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-040271" does not exist

>>> k8s: kube-proxy logs:
error: context "false-040271" does not exist

>>> host: kubelet daemon status:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: kubelet daemon config:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> k8s: kubelet logs:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.39:8443
  name: NoKubernetes-987806
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.206:8443
  name: cert-expiration-092911
contexts:
- context:
    cluster: NoKubernetes-987806
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-987806
  name: NoKubernetes-987806
- context:
    cluster: cert-expiration-092911
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-092911
  name: cert-expiration-092911
current-context: NoKubernetes-987806
kind: Config
preferences: {}
users:
- name: NoKubernetes-987806
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/NoKubernetes-987806/client.crt
    client-key: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/NoKubernetes-987806/client.key
- name: cert-expiration-092911
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/cert-expiration-092911/client.crt
    client-key: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/cert-expiration-092911/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-040271

>>> host: docker daemon status:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: docker daemon config:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /etc/docker/daemon.json:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: docker system info:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: cri-docker daemon status:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: cri-docker daemon config:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: cri-dockerd version:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: containerd daemon status:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: containerd daemon config:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /etc/containerd/config.toml:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: containerd config dump:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: crio daemon status:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: crio daemon config:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: /etc/crio:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

>>> host: crio config:
* Profile "false-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-040271"

----------------------- debugLogs end: false-040271 [took: 3.136340039s] --------------------------------
helpers_test.go:175: Cleaning up "false-040271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-040271
--- PASS: TestNetworkPlugins/group/false (3.41s)

                                                
                                    
TestNoKubernetes/serial/Start (51.92s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987806 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987806 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (51.924352251s)
--- PASS: TestNoKubernetes/serial/Start (51.92s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-987806 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-987806 "sudo systemctl is-active --quiet service kubelet": exit status 1 (219.658406ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.57s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.188041621s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.57s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.38s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-987806
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-987806: (1.378236965s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (46.25s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-987806 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-987806 --driver=kvm2  --container-runtime=containerd: (46.252818778s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (46.25s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-987806 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-987806 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.372197ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.06s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.06s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (165.92s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.3354110610.exe start -p stopped-upgrade-311350 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.3354110610.exe start -p stopped-upgrade-311350 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m19.799102147s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.3354110610.exe -p stopped-upgrade-311350 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.3354110610.exe -p stopped-upgrade-311350 stop: (2.356135798s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-311350 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-311350 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m23.76386833s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (165.92s)

                                                
                                    
TestPause/serial/Start (66.08s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-839182 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-839182 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m6.078879051s)
--- PASS: TestPause/serial/Start (66.08s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (103.35s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0108 23:43:59.339459   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m43.353716322s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (91.62s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m31.623219576s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.62s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (44.66s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-839182 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-839182 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (44.644115991s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.66s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-311350
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-311350: (1.023563184s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

TestNetworkPlugins/group/calico/Start (122.17s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m2.168790752s)
--- PASS: TestNetworkPlugins/group/calico/Start (122.17s)

TestPause/serial/Pause (1.19s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-839182 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-839182 --alsologtostderr -v=5: (1.193837168s)
--- PASS: TestPause/serial/Pause (1.19s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-839182 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-839182 --output=json --layout=cluster: exit status 2 (317.282257ms)
-- stdout --
	{"Name":"pause-839182","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-839182","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.79s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-839182 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

TestPause/serial/PauseAgain (1.14s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-839182 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-839182 --alsologtostderr -v=5: (1.142772523s)
--- PASS: TestPause/serial/PauseAgain (1.14s)

TestPause/serial/DeletePaused (0.93s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-839182 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.93s)

TestPause/serial/VerifyDeletedResources (0.43s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)

TestNetworkPlugins/group/custom-flannel/Start (104.99s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0108 23:45:29.225911   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m44.990253142s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (104.99s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cwh5p" [7c440702-611c-445e-b5cd-6e3cf50fa9e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005855081s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-040271 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-040271 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kt4cs" [5ec0d07d-17cd-4a1b-b153-3c0f255eeff5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kt4cs" [5ec0d07d-17cd-4a1b-b153-3c0f255eeff5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00533477s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.27s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-040271 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-040271 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xkx74" [c9037b3a-dc80-47b9-a4c5-ad93aa1ccd42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xkx74" [c9037b3a-dc80-47b9-a4c5-ad93aa1ccd42] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005546106s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-040271 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-040271 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (115.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m55.630402144s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (115.63s)

TestNetworkPlugins/group/flannel/Start (126.01s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m6.007956325s)
--- PASS: TestNetworkPlugins/group/flannel/Start (126.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-x5q2r" [80ee0021-1914-4836-9cdb-011c32fe5676] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005564774s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-040271 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (13.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-040271 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z6z4x" [7c96a82a-8ac3-45a6-bd4e-c42a031cf034] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z6z4x" [7c96a82a-8ac3-45a6-bd4e-c42a031cf034] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005244393s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-040271 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-040271 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m6f5c" [0370a20e-498c-4082-a180-19ee8ce744e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m6f5c" [0370a20e-498c-4082-a180-19ee8ce744e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006218947s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-040271 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-040271 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (104.52s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-040271 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m44.518437189s)
--- PASS: TestNetworkPlugins/group/bridge/Start (104.52s)

TestStartStop/group/old-k8s-version/serial/FirstStart (156.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-758701 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-758701 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m36.700940992s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (156.70s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-040271 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-040271 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7lngl" [ec9a7a8e-9e6e-4b46-a5ef-c9722b84edd7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7lngl" [ec9a7a8e-9e6e-4b46-a5ef-c9722b84edd7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.006618467s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-040271 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bqtch" [57fc92b2-c67d-44eb-80f3-85b5457554cc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005676498s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-040271 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (9.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-040271 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bzzjz" [dbece8c0-6de1-465d-8074-a6ef16c2cd50] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bzzjz" [dbece8c0-6de1-465d-8074-a6ef16c2cd50] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005320908s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.30s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-040271 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestStartStop/group/no-preload/serial/FirstStart (188.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-305301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0108 23:48:37.895384   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-305301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (3m8.017214499s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (188.02s)

TestStartStop/group/embed-certs/serial/FirstStart (80.3s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-042252 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-042252 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m20.295790386s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.30s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-040271 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-040271 replace --force -f testdata/netcat-deployment.yaml
E0108 23:48:59.339186   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vrplw" [056bdb55-c15a-48c9-bc50-5de6413775cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vrplw" [056bdb55-c15a-48c9-bc50-5de6413775cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.007910692s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-040271 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-040271 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0108 23:56:09.788980   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:56:14.987981   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:56:37.771436   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:56:43.330384   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:56:45.357374   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:57:05.456024   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:57:13.041344   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (111.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-506836 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-506836 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m51.066925106s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (111.07s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-758701 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [98305a77-b1d4-4e79-8899-b8fed3946b15] Pending
helpers_test.go:344: "busybox" [98305a77-b1d4-4e79-8899-b8fed3946b15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [98305a77-b1d4-4e79-8899-b8fed3946b15] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.006059243s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-758701 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-758701 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-758701 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/old-k8s-version/serial/Stop (92.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-758701 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-758701 --alsologtostderr -v=3: (1m32.037005663s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.04s)

TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-042252 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e1e0a597-0a7b-4d3e-9215-e68f43a8818c] Pending
helpers_test.go:344: "busybox" [e1e0a597-0a7b-4d3e-9215-e68f43a8818c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e1e0a597-0a7b-4d3e-9215-e68f43a8818c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004533639s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-042252 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-042252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-042252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.004978442s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-042252 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/embed-certs/serial/Stop (91.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-042252 --alsologtostderr -v=3
E0108 23:50:29.226303   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:50:34.681536   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:34.686790   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:34.697028   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:34.717275   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:34.757524   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:34.838512   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:34.998971   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:35.319132   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:35.960126   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:37.240406   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:39.800681   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:42.107620   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:42.112871   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:42.123117   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:42.143347   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:42.183626   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:42.263932   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:42.424205   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:42.744410   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:43.385223   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:44.666379   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:44.921552   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:50:47.227104   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:52.347381   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:50:55.162371   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:51:02.587521   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:51:15.642944   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-042252 --alsologtostderr -v=3: (1m31.716210846s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.72s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-506836 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1e5e0ba6-9110-4803-bab3-c012e21bf37a] Pending
helpers_test.go:344: "busybox" [1e5e0ba6-9110-4803-bab3-c012e21bf37a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1e5e0ba6-9110-4803-bab3-c012e21bf37a] Running
E0108 23:51:23.067927   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004689343s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-506836 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-506836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-506836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003993066s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-506836 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (92.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-506836 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-506836 --alsologtostderr -v=3: (1m32.359379069s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-758701 -n old-k8s-version-758701
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-758701 -n old-k8s-version-758701: exit status 7 (71.706173ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-758701 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (110.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-758701 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0108 23:51:37.771178   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:37.776438   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:37.786721   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:37.806964   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:37.847265   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:37.927943   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:38.088765   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:38.409895   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:39.051036   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:40.331783   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:51:42.892305   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-758701 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m50.151118186s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-758701 -n old-k8s-version-758701
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (110.42s)

TestStartStop/group/no-preload/serial/DeployApp (11.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-305301 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5c2af8d4-2a1e-4cd0-a505-f7fd6e9de22e] Pending
helpers_test.go:344: "busybox" [5c2af8d4-2a1e-4cd0-a505-f7fd6e9de22e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0108 23:51:45.357601   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:45.362899   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:45.373134   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:45.393404   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:45.433647   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:45.514007   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:45.674395   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:45.995507   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:46.635673   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:47.916373   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:51:48.013095   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
helpers_test.go:344: "busybox" [5c2af8d4-2a1e-4cd0-a505-f7fd6e9de22e] Running
E0108 23:51:50.477341   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004224227s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-305301 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-305301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0108 23:51:55.597686   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-305301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.00123639s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-305301 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/Stop (102.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-305301 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-305301 --alsologtostderr -v=3: (1m42.171609953s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (102.17s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042252 -n embed-certs-042252
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042252 -n embed-certs-042252: exit status 7 (98.930086ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-042252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/embed-certs/serial/SecondStart (325.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-042252 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0108 23:51:56.603328   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:51:58.254229   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:52:04.028111   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:52:05.838545   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:52:18.734880   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:52:26.319702   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:52:59.695116   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-042252 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m24.921202934s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-042252 -n embed-certs-042252
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (325.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-506836 -n default-k8s-diff-port-506836
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-506836 -n default-k8s-diff-port-506836: exit status 7 (76.468232ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-506836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-506836 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0108 23:53:07.280362   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:53:07.712927   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:07.718172   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:07.728419   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:07.748697   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:07.789062   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:07.869571   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:08.030049   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:08.351058   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:08.991427   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:10.272486   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:12.833093   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:17.953518   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:18.523927   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:53:19.038597   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:19.043859   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:19.054085   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:19.074319   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:19.114549   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:19.194831   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:19.355251   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:19.675846   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:20.316441   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:20.945304   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
E0108 23:53:21.596920   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:24.158066   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:25.948244   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-506836 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m4.704283674s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-506836 -n default-k8s-diff-port-506836
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.98s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (31.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0108 23:53:28.193847   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:53:29.278974   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lhvs5" [7a955bec-334c-48db-8e80-4bef53a63c7d] Pending
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lhvs5" [7a955bec-334c-48db-8e80-4bef53a63c7d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0108 23:53:37.894892   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/addons-917645/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lhvs5" [7a955bec-334c-48db-8e80-4bef53a63c7d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 31.004541635s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (31.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-305301 -n no-preload-305301
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-305301 -n no-preload-305301: exit status 7 (90.402892ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-305301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (307.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-305301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0108 23:53:39.519614   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:53:48.674977   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-305301 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m7.536550446s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-305301 -n no-preload-305301
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (307.80s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-lhvs5" [7a955bec-334c-48db-8e80-4bef53a63c7d] Running
E0108 23:53:59.339541   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:53:59.487694   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:53:59.492967   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:53:59.503219   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:53:59.523492   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:53:59.563970   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:53:59.644300   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:53:59.804575   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:54:00.000715   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:54:00.125028   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:54:00.766107   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:54:02.046651   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005129438s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-758701 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-758701 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-758701 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-758701 -n old-k8s-version-758701
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-758701 -n old-k8s-version-758701: exit status 2 (261.613945ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-758701 -n old-k8s-version-758701
E0108 23:54:04.607480   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-758701 -n old-k8s-version-758701: exit status 2 (279.66725ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-758701 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-758701 -n old-k8s-version-758701
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-758701 -n old-k8s-version-758701
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

TestStartStop/group/newest-cni/serial/FirstStart (65.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-718428 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0108 23:54:09.727735   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:54:19.967951   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:54:21.615768   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/calico-040271/client.crt: no such file or directory
E0108 23:54:29.200744   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/custom-flannel-040271/client.crt: no such file or directory
E0108 23:54:29.635955   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:54:40.448793   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:54:40.961764   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
E0108 23:54:53.064745   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:53.070004   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:53.080316   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:53.100613   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:53.140936   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:53.221801   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:53.382230   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:53.702796   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:54.343726   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:55.624436   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:54:58.185337   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:55:03.305726   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-718428 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m5.85464732s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.85s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-718428 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0108 23:55:13.546806   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-718428 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.225744215s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/newest-cni/serial/Stop (2.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-718428 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-718428 --alsologtostderr -v=3: (2.118075116s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718428 -n newest-cni-718428
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718428 -n newest-cni-718428: exit status 7 (75.832328ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-718428 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (46s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-718428 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0108 23:55:21.409492   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
E0108 23:55:29.225795   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/functional-817032/client.crt: no such file or directory
E0108 23:55:34.026948   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
E0108 23:55:34.681865   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
E0108 23:55:42.108047   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/auto-040271/client.crt: no such file or directory
E0108 23:55:51.556147   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
E0108 23:56:02.364759   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/kindnet-040271/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-718428 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (45.686248587s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-718428 -n newest-cni-718428
E0108 23:56:02.882093   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (46.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-718428 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (2.54s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-718428 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-718428 -n newest-cni-718428
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-718428 -n newest-cni-718428: exit status 2 (248.862511ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-718428 -n newest-cni-718428
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-718428 -n newest-cni-718428: exit status 2 (250.800976ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-718428 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-718428 -n newest-cni-718428
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-718428 -n newest-cni-718428
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.54s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sq8lq" [dc89cfa6-7a27-45fa-960f-b292fcbf7cbb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sq8lq" [dc89cfa6-7a27-45fa-960f-b292fcbf7cbb] Running
E0108 23:57:36.908752   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/old-k8s-version-758701/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.004662992s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (19.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sq8lq" [dc89cfa6-7a27-45fa-960f-b292fcbf7cbb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004017167s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-042252 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-042252 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.56s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-042252 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-042252 -n embed-certs-042252
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-042252 -n embed-certs-042252: exit status 2 (246.827298ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-042252 -n embed-certs-042252
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-042252 -n embed-certs-042252: exit status 2 (247.374115ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-042252 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-042252 -n embed-certs-042252
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-042252 -n embed-certs-042252
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.56s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nv92g" [766df98d-e6ed-434b-a975-dc94ddc49b50] Running
E0108 23:58:07.712590   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/enable-default-cni-040271/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005275028s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nv92g" [766df98d-e6ed-434b-a975-dc94ddc49b50] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004599783s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-506836 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-506836 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-506836 --alsologtostderr -v=1
E0108 23:58:19.038406   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-506836 -n default-k8s-diff-port-506836
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-506836 -n default-k8s-diff-port-506836: exit status 2 (251.383021ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-506836 -n default-k8s-diff-port-506836
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-506836 -n default-k8s-diff-port-506836: exit status 2 (251.917157ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-506836 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-506836 -n default-k8s-diff-port-506836
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-506836 -n default-k8s-diff-port-506836
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.62s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z69df" [c67a79f6-0ecd-4031-9318-3ffdde3d1bf1] Running
E0108 23:58:46.722795   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/flannel-040271/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.011744237s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z69df" [c67a79f6-0ecd-4031-9318-3ffdde3d1bf1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003585743s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-305301 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-305301 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.43s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-305301 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-305301 -n no-preload-305301
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-305301 -n no-preload-305301: exit status 2 (239.295101ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-305301 -n no-preload-305301
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-305301 -n no-preload-305301: exit status 2 (236.690084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-305301 --alsologtostderr -v=1
E0108 23:58:59.339384   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/ingress-addon-legacy-546712/client.crt: no such file or directory
E0108 23:58:59.487065   15598 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/bridge-040271/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-305301 -n no-preload-305301
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-305301 -n no-preload-305301
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.43s)

Test skip (39/314)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.2/binaries 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
52 TestDockerFlags 0
55 TestDockerEnvContainerd 0
57 TestHyperKitDriverInstallOrUpdate 0
58 TestHyperkitDriverSkipUpgrade 0
109 TestFunctional/parallel/DockerEnv 0
110 TestFunctional/parallel/PodmanEnv 0
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
158 TestGvisorAddon 0
159 TestImageBuild 0
192 TestKicCustomNetwork 0
193 TestKicExistingNetwork 0
194 TestKicCustomSubnet 0
195 TestKicStaticIP 0
227 TestChangeNoneUser 0
230 TestScheduledStopWindows 0
232 TestSkaffold 0
234 TestInsufficientStorage 0
238 TestMissingContainerUpgrade 0
245 TestNetworkPlugins/group/kubenet 5.92
253 TestNetworkPlugins/group/cilium 3.8
267 TestStartStop/group/disable-driver-mounts 0.15

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (5.92s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-040271 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-040271
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-040271
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-040271
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-040271
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-040271
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-040271
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-040271
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-040271
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-040271
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-040271
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /etc/hosts:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /etc/resolv.conf:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-040271
>>> host: crictl pods:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: crictl containers:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> k8s: describe netcat deployment:
error: context "kubenet-040271" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-040271" does not exist
>>> k8s: netcat logs:
error: context "kubenet-040271" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-040271" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-040271" does not exist
>>> k8s: coredns logs:
error: context "kubenet-040271" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-040271" does not exist
>>> k8s: api server logs:
error: context "kubenet-040271" does not exist
>>> host: /etc/cni:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: ip a s:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: ip r s:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: iptables-save:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: iptables table nat:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-040271" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-040271" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-040271" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: kubelet daemon config:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> k8s: kubelet logs:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.39:8443
  name: NoKubernetes-987806
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.206:8443
  name: cert-expiration-092911
contexts:
- context:
    cluster: NoKubernetes-987806
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-987806
  name: NoKubernetes-987806
- context:
    cluster: cert-expiration-092911
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-092911
  name: cert-expiration-092911
current-context: NoKubernetes-987806
kind: Config
preferences: {}
users:
- name: NoKubernetes-987806
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/NoKubernetes-987806/client.crt
    client-key: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/NoKubernetes-987806/client.key
- name: cert-expiration-092911
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/cert-expiration-092911/client.crt
    client-key: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/cert-expiration-092911/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-040271
>>> host: docker daemon status:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: docker daemon config:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: docker system info:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: cri-docker daemon status:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: cri-docker daemon config:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: cri-dockerd version:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: containerd daemon status:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: containerd daemon config:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: containerd config dump:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: crio daemon status:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: crio daemon config:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: /etc/crio:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
>>> host: crio config:
* Profile "kubenet-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-040271"
----------------------- debugLogs end: kubenet-040271 [took: 5.780545243s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-040271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-040271
--- SKIP: TestNetworkPlugins/group/kubenet (5.92s)

TestNetworkPlugins/group/cilium (3.8s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-040271 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-040271
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-040271
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-040271
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-040271
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-040271
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-040271
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-040271
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-040271
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-040271
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-040271
>>> host: /etc/nsswitch.conf:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"
>>> host: /etc/hosts:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"
>>> host: /etc/resolv.conf:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-040271
>>> host: crictl pods:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"
>>> host: crictl containers:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"
>>> k8s: describe netcat deployment:
error: context "cilium-040271" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-040271" does not exist
>>> k8s: netcat logs:
error: context "cilium-040271" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-040271" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-040271" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-040271" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-040271" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-040271" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: ip a s:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: ip r s:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: iptables-save:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: iptables table nat:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-040271

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-040271

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-040271" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-040271" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-040271

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-040271

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-040271" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-040271" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-040271" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-040271" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-040271" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: kubelet daemon config:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> k8s: kubelet logs:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.39.39:8443
  name: NoKubernetes-987806
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17830-8357/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.206:8443
  name: cert-expiration-092911
contexts:
- context:
    cluster: NoKubernetes-987806
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: NoKubernetes-987806
  name: NoKubernetes-987806
- context:
    cluster: cert-expiration-092911
    extensions:
    - extension:
        last-update: Mon, 08 Jan 2024 23:39:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-092911
  name: cert-expiration-092911
current-context: NoKubernetes-987806
kind: Config
preferences: {}
users:
- name: NoKubernetes-987806
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/NoKubernetes-987806/client.crt
    client-key: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/NoKubernetes-987806/client.key
- name: cert-expiration-092911
  user:
    client-certificate: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/cert-expiration-092911/client.crt
    client-key: /home/jenkins/minikube-integration/17830-8357/.minikube/profiles/cert-expiration-092911/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-040271

>>> host: docker daemon status:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: docker daemon config:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: docker system info:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: cri-docker daemon status:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: cri-docker daemon config:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: cri-dockerd version:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: containerd daemon status:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: containerd daemon config:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: containerd config dump:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: crio daemon status:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: crio daemon config:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: /etc/crio:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

>>> host: crio config:
* Profile "cilium-040271" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-040271"

----------------------- debugLogs end: cilium-040271 [took: 3.610392698s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-040271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-040271
--- SKIP: TestNetworkPlugins/group/cilium (3.80s)
TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-871159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-871159
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)