Test Report: KVM_Linux_containerd 17233

bcf36281e7a7fa8f06366be9b7645ae5ef101314:2023-09-12:30983

Failed tests (1/302)

|-------|-------------------------------------|----------|
| Order | Failed test                         | Duration |
|-------|-------------------------------------|----------|
| 26    | TestAddons/parallel/InspektorGadget | 7.89s    |
|-------|-------------------------------------|----------|
TestAddons/parallel/InspektorGadget (7.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2h7fx" [3b95050f-4321-4ede-9833-d64e065bb7d1] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.024755764s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-436608
addons_test.go:817: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-436608: exit status 11 (507.231911ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-09-12T18:24:54Z" level=error msg="stat /run/containerd/runc/k8s.io/ae2febcc5797b78d92392d044ba5decc18af0a32bfc9ee9c3ec5b54831b0a1b6: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:818: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-436608" : exit status 11
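For context on the failure above: disabling the addon triggers a paused-container check that shells out to `runc list`, and that command failed because a container's state directory under /run/containerd/runc/k8s.io vanished between being enumerated and being stat'ed ("no such file or directory"). A minimal sketch of tolerating that kind of race when stat'ing short-lived state paths (hypothetical helper, not minikube or runc code):

```python
import os

def safe_stat(path):
    """Stat a container state path, returning None if it has vanished.

    Sketch of the race seen in the error above: a container can exit
    between a 'runc list' enumerating its state directory and a later
    stat of that directory, so ENOENT here is expected, not fatal.
    """
    try:
        return os.stat(path)
    except FileNotFoundError:
        return None

# A path that has already disappeared yields None instead of an exception.
print(safe_stat("/run/containerd/runc/k8s.io/does-not-exist"))
```

Treating ENOENT as "container already gone" (rather than an error) is one way a caller of such a check could avoid failing spuriously.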
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-436608 -n addons-436608
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-436608 logs -n 25: (1.465442928s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-647738 | jenkins | v1.31.2 | 12 Sep 23 18:21 UTC |                     |
	|         | -p download-only-647738        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-647738 | jenkins | v1.31.2 | 12 Sep 23 18:21 UTC |                     |
	|         | -p download-only-647738        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 12 Sep 23 18:22 UTC | 12 Sep 23 18:22 UTC |
	| delete  | -p download-only-647738        | download-only-647738 | jenkins | v1.31.2 | 12 Sep 23 18:22 UTC | 12 Sep 23 18:22 UTC |
	| delete  | -p download-only-647738        | download-only-647738 | jenkins | v1.31.2 | 12 Sep 23 18:22 UTC | 12 Sep 23 18:22 UTC |
	| start   | --download-only -p             | binary-mirror-547063 | jenkins | v1.31.2 | 12 Sep 23 18:22 UTC |                     |
	|         | binary-mirror-547063           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45659         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-547063        | binary-mirror-547063 | jenkins | v1.31.2 | 12 Sep 23 18:22 UTC | 12 Sep 23 18:22 UTC |
	| start   | -p addons-436608               | addons-436608        | jenkins | v1.31.2 | 12 Sep 23 18:22 UTC | 12 Sep 23 18:24 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-436608        | jenkins | v1.31.2 | 12 Sep 23 18:24 UTC | 12 Sep 23 18:24 UTC |
	|         | -p addons-436608               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-436608 addons           | addons-436608        | jenkins | v1.31.2 | 12 Sep 23 18:24 UTC | 12 Sep 23 18:24 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-436608        | jenkins | v1.31.2 | 12 Sep 23 18:24 UTC | 12 Sep 23 18:24 UTC |
	|         | addons-436608                  |                      |         |         |                     |                     |
	| addons  | addons-436608 addons disable   | addons-436608        | jenkins | v1.31.2 | 12 Sep 23 18:24 UTC | 12 Sep 23 18:24 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| ip      | addons-436608 ip               | addons-436608        | jenkins | v1.31.2 | 12 Sep 23 18:24 UTC | 12 Sep 23 18:24 UTC |
	| addons  | addons-436608 addons disable   | addons-436608        | jenkins | v1.31.2 | 12 Sep 23 18:24 UTC | 12 Sep 23 18:24 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-436608        | jenkins | v1.31.2 | 12 Sep 23 18:24 UTC |                     |
	|         | addons-436608                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 18:22:13
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 18:22:13.058086   11859 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:22:13.058330   11859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:22:13.058339   11859 out.go:309] Setting ErrFile to fd 2...
	I0912 18:22:13.058343   11859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:22:13.058524   11859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
	I0912 18:22:13.059057   11859 out.go:303] Setting JSON to false
	I0912 18:22:13.059799   11859 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":279,"bootTime":1694542654,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:22:13.059855   11859 start.go:138] virtualization: kvm guest
	I0912 18:22:13.062092   11859 out.go:177] * [addons-436608] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:22:13.063513   11859 out.go:177]   - MINIKUBE_LOCATION=17233
	I0912 18:22:13.064862   11859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:22:13.063543   11859 notify.go:220] Checking for updates...
	I0912 18:22:13.067511   11859 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	I0912 18:22:13.069058   11859 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	I0912 18:22:13.070448   11859 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 18:22:13.071845   11859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 18:22:13.073161   11859 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:22:13.104254   11859 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 18:22:13.105386   11859 start.go:298] selected driver: kvm2
	I0912 18:22:13.105396   11859 start.go:902] validating driver "kvm2" against <nil>
	I0912 18:22:13.105407   11859 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 18:22:13.105983   11859 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:22:13.106062   11859 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17233-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 18:22:13.120093   11859 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0912 18:22:13.120148   11859 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 18:22:13.120476   11859 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 18:22:13.120524   11859 cni.go:84] Creating CNI manager for ""
	I0912 18:22:13.120545   11859 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 18:22:13.120565   11859 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 18:22:13.120581   11859 start_flags.go:321] config:
	{Name:addons-436608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-436608 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:22:13.120747   11859 iso.go:125] acquiring lock: {Name:mk65f8eb47eabe5b76d2b188fdc61eff869c985e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:22:13.122537   11859 out.go:177] * Starting control plane node addons-436608 in cluster addons-436608
	I0912 18:22:13.123876   11859 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0912 18:22:13.123905   11859 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17233-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4
	I0912 18:22:13.123916   11859 cache.go:57] Caching tarball of preloaded images
	I0912 18:22:13.123979   11859 preload.go:174] Found /home/jenkins/minikube-integration/17233-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0912 18:22:13.123988   11859 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on containerd
	I0912 18:22:13.124265   11859 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/config.json ...
	I0912 18:22:13.124286   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/config.json: {Name:mk5abb0dc1140aade3fcd093f2b5e8d80f17f357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:13.124426   11859 start.go:365] acquiring machines lock for addons-436608: {Name:mkfbd07ac372200caa31e5a0ba10a7d7d70e2eb1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0912 18:22:13.124474   11859 start.go:369] acquired machines lock for "addons-436608" in 34.45µs
	I0912 18:22:13.124490   11859 start.go:93] Provisioning new machine with config: &{Name:addons-436608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-436608 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0912 18:22:13.124559   11859 start.go:125] createHost starting for "" (driver="kvm2")
	I0912 18:22:13.126439   11859 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0912 18:22:13.126776   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:22:13.126863   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:22:13.146279   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0912 18:22:13.146698   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:22:13.147163   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:22:13.147181   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:22:13.147504   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:22:13.147646   11859 main.go:141] libmachine: (addons-436608) Calling .GetMachineName
	I0912 18:22:13.147809   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:22:13.147976   11859 start.go:159] libmachine.API.Create for "addons-436608" (driver="kvm2")
	I0912 18:22:13.147996   11859 client.go:168] LocalClient.Create starting
	I0912 18:22:13.148035   11859 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17233-3774/.minikube/certs/ca.pem
	I0912 18:22:13.307542   11859 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17233-3774/.minikube/certs/cert.pem
	I0912 18:22:13.385959   11859 main.go:141] libmachine: Running pre-create checks...
	I0912 18:22:13.385981   11859 main.go:141] libmachine: (addons-436608) Calling .PreCreateCheck
	I0912 18:22:13.386449   11859 main.go:141] libmachine: (addons-436608) Calling .GetConfigRaw
	I0912 18:22:13.386871   11859 main.go:141] libmachine: Creating machine...
	I0912 18:22:13.386885   11859 main.go:141] libmachine: (addons-436608) Calling .Create
	I0912 18:22:13.387021   11859 main.go:141] libmachine: (addons-436608) Creating KVM machine...
	I0912 18:22:13.388252   11859 main.go:141] libmachine: (addons-436608) DBG | found existing default KVM network
	I0912 18:22:13.388961   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:13.388800   11881 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001478f0}
	I0912 18:22:13.394444   11859 main.go:141] libmachine: (addons-436608) DBG | trying to create private KVM network mk-addons-436608 192.168.39.0/24...
	I0912 18:22:13.460335   11859 main.go:141] libmachine: (addons-436608) DBG | private KVM network mk-addons-436608 192.168.39.0/24 created
	I0912 18:22:13.460380   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:13.460281   11881 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17233-3774/.minikube
	I0912 18:22:13.460397   11859 main.go:141] libmachine: (addons-436608) Setting up store path in /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608 ...
	I0912 18:22:13.460416   11859 main.go:141] libmachine: (addons-436608) Building disk image from file:///home/jenkins/minikube-integration/17233-3774/.minikube/cache/iso/amd64/minikube-v1.31.0-1694081706-17207-amd64.iso
	I0912 18:22:13.460464   11859 main.go:141] libmachine: (addons-436608) Downloading /home/jenkins/minikube-integration/17233-3774/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17233-3774/.minikube/cache/iso/amd64/minikube-v1.31.0-1694081706-17207-amd64.iso...
	I0912 18:22:13.677773   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:13.677629   11881 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa...
	I0912 18:22:13.815804   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:13.815686   11881 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/addons-436608.rawdisk...
	I0912 18:22:13.815839   11859 main.go:141] libmachine: (addons-436608) DBG | Writing magic tar header
	I0912 18:22:13.815855   11859 main.go:141] libmachine: (addons-436608) DBG | Writing SSH key tar header
	I0912 18:22:13.815869   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:13.815802   11881 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608 ...
	I0912 18:22:13.815963   11859 main.go:141] libmachine: (addons-436608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608
	I0912 18:22:13.815996   11859 main.go:141] libmachine: (addons-436608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17233-3774/.minikube/machines
	I0912 18:22:13.816016   11859 main.go:141] libmachine: (addons-436608) Setting executable bit set on /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608 (perms=drwx------)
	I0912 18:22:13.816031   11859 main.go:141] libmachine: (addons-436608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17233-3774/.minikube
	I0912 18:22:13.816048   11859 main.go:141] libmachine: (addons-436608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17233-3774
	I0912 18:22:13.816066   11859 main.go:141] libmachine: (addons-436608) Setting executable bit set on /home/jenkins/minikube-integration/17233-3774/.minikube/machines (perms=drwxr-xr-x)
	I0912 18:22:13.816076   11859 main.go:141] libmachine: (addons-436608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0912 18:22:13.816087   11859 main.go:141] libmachine: (addons-436608) DBG | Checking permissions on dir: /home/jenkins
	I0912 18:22:13.816101   11859 main.go:141] libmachine: (addons-436608) DBG | Checking permissions on dir: /home
	I0912 18:22:13.816114   11859 main.go:141] libmachine: (addons-436608) Setting executable bit set on /home/jenkins/minikube-integration/17233-3774/.minikube (perms=drwxr-xr-x)
	I0912 18:22:13.816131   11859 main.go:141] libmachine: (addons-436608) Setting executable bit set on /home/jenkins/minikube-integration/17233-3774 (perms=drwxrwxr-x)
	I0912 18:22:13.816141   11859 main.go:141] libmachine: (addons-436608) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0912 18:22:13.816150   11859 main.go:141] libmachine: (addons-436608) DBG | Skipping /home - not owner
	I0912 18:22:13.816166   11859 main.go:141] libmachine: (addons-436608) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0912 18:22:13.816177   11859 main.go:141] libmachine: (addons-436608) Creating domain...
	I0912 18:22:13.817087   11859 main.go:141] libmachine: (addons-436608) define libvirt domain using xml: 
	I0912 18:22:13.817109   11859 main.go:141] libmachine: (addons-436608) <domain type='kvm'>
	I0912 18:22:13.817117   11859 main.go:141] libmachine: (addons-436608)   <name>addons-436608</name>
	I0912 18:22:13.817123   11859 main.go:141] libmachine: (addons-436608)   <memory unit='MiB'>4000</memory>
	I0912 18:22:13.817153   11859 main.go:141] libmachine: (addons-436608)   <vcpu>2</vcpu>
	I0912 18:22:13.817180   11859 main.go:141] libmachine: (addons-436608)   <features>
	I0912 18:22:13.817194   11859 main.go:141] libmachine: (addons-436608)     <acpi/>
	I0912 18:22:13.817205   11859 main.go:141] libmachine: (addons-436608)     <apic/>
	I0912 18:22:13.817216   11859 main.go:141] libmachine: (addons-436608)     <pae/>
	I0912 18:22:13.817226   11859 main.go:141] libmachine: (addons-436608)     
	I0912 18:22:13.817235   11859 main.go:141] libmachine: (addons-436608)   </features>
	I0912 18:22:13.817246   11859 main.go:141] libmachine: (addons-436608)   <cpu mode='host-passthrough'>
	I0912 18:22:13.817351   11859 main.go:141] libmachine: (addons-436608)   
	I0912 18:22:13.817382   11859 main.go:141] libmachine: (addons-436608)   </cpu>
	I0912 18:22:13.817395   11859 main.go:141] libmachine: (addons-436608)   <os>
	I0912 18:22:13.817414   11859 main.go:141] libmachine: (addons-436608)     <type>hvm</type>
	I0912 18:22:13.817430   11859 main.go:141] libmachine: (addons-436608)     <boot dev='cdrom'/>
	I0912 18:22:13.817443   11859 main.go:141] libmachine: (addons-436608)     <boot dev='hd'/>
	I0912 18:22:13.817457   11859 main.go:141] libmachine: (addons-436608)     <bootmenu enable='no'/>
	I0912 18:22:13.817473   11859 main.go:141] libmachine: (addons-436608)   </os>
	I0912 18:22:13.817486   11859 main.go:141] libmachine: (addons-436608)   <devices>
	I0912 18:22:13.817501   11859 main.go:141] libmachine: (addons-436608)     <disk type='file' device='cdrom'>
	I0912 18:22:13.817521   11859 main.go:141] libmachine: (addons-436608)       <source file='/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/boot2docker.iso'/>
	I0912 18:22:13.817541   11859 main.go:141] libmachine: (addons-436608)       <target dev='hdc' bus='scsi'/>
	I0912 18:22:13.817555   11859 main.go:141] libmachine: (addons-436608)       <readonly/>
	I0912 18:22:13.817572   11859 main.go:141] libmachine: (addons-436608)     </disk>
	I0912 18:22:13.817589   11859 main.go:141] libmachine: (addons-436608)     <disk type='file' device='disk'>
	I0912 18:22:13.817605   11859 main.go:141] libmachine: (addons-436608)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0912 18:22:13.817643   11859 main.go:141] libmachine: (addons-436608)       <source file='/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/addons-436608.rawdisk'/>
	I0912 18:22:13.817659   11859 main.go:141] libmachine: (addons-436608)       <target dev='hda' bus='virtio'/>
	I0912 18:22:13.817669   11859 main.go:141] libmachine: (addons-436608)     </disk>
	I0912 18:22:13.817684   11859 main.go:141] libmachine: (addons-436608)     <interface type='network'>
	I0912 18:22:13.817703   11859 main.go:141] libmachine: (addons-436608)       <source network='mk-addons-436608'/>
	I0912 18:22:13.817718   11859 main.go:141] libmachine: (addons-436608)       <model type='virtio'/>
	I0912 18:22:13.817730   11859 main.go:141] libmachine: (addons-436608)     </interface>
	I0912 18:22:13.817745   11859 main.go:141] libmachine: (addons-436608)     <interface type='network'>
	I0912 18:22:13.817756   11859 main.go:141] libmachine: (addons-436608)       <source network='default'/>
	I0912 18:22:13.817776   11859 main.go:141] libmachine: (addons-436608)       <model type='virtio'/>
	I0912 18:22:13.817794   11859 main.go:141] libmachine: (addons-436608)     </interface>
	I0912 18:22:13.817807   11859 main.go:141] libmachine: (addons-436608)     <serial type='pty'>
	I0912 18:22:13.817815   11859 main.go:141] libmachine: (addons-436608)       <target port='0'/>
	I0912 18:22:13.817825   11859 main.go:141] libmachine: (addons-436608)     </serial>
	I0912 18:22:13.817839   11859 main.go:141] libmachine: (addons-436608)     <console type='pty'>
	I0912 18:22:13.817854   11859 main.go:141] libmachine: (addons-436608)       <target type='serial' port='0'/>
	I0912 18:22:13.817870   11859 main.go:141] libmachine: (addons-436608)     </console>
	I0912 18:22:13.817883   11859 main.go:141] libmachine: (addons-436608)     <rng model='virtio'>
	I0912 18:22:13.817896   11859 main.go:141] libmachine: (addons-436608)       <backend model='random'>/dev/random</backend>
	I0912 18:22:13.817905   11859 main.go:141] libmachine: (addons-436608)     </rng>
	I0912 18:22:13.817916   11859 main.go:141] libmachine: (addons-436608)     
	I0912 18:22:13.817930   11859 main.go:141] libmachine: (addons-436608)     
	I0912 18:22:13.817946   11859 main.go:141] libmachine: (addons-436608)   </devices>
	I0912 18:22:13.817959   11859 main.go:141] libmachine: (addons-436608) </domain>
	I0912 18:22:13.817972   11859 main.go:141] libmachine: (addons-436608) 
	I0912 18:22:13.823106   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:8a:af:1b in network default
	I0912 18:22:13.823577   11859 main.go:141] libmachine: (addons-436608) Ensuring networks are active...
	I0912 18:22:13.823602   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:13.824166   11859 main.go:141] libmachine: (addons-436608) Ensuring network default is active
	I0912 18:22:13.824438   11859 main.go:141] libmachine: (addons-436608) Ensuring network mk-addons-436608 is active
	I0912 18:22:13.824903   11859 main.go:141] libmachine: (addons-436608) Getting domain xml...
	I0912 18:22:13.825467   11859 main.go:141] libmachine: (addons-436608) Creating domain...
	I0912 18:22:15.201433   11859 main.go:141] libmachine: (addons-436608) Waiting to get IP...
	I0912 18:22:15.202105   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:15.202440   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:15.202485   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:15.202431   11881 retry.go:31] will retry after 288.202008ms: waiting for machine to come up
	I0912 18:22:15.491848   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:15.492274   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:15.492302   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:15.492215   11881 retry.go:31] will retry after 267.360197ms: waiting for machine to come up
	I0912 18:22:15.761524   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:15.761889   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:15.761916   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:15.761834   11881 retry.go:31] will retry after 359.944797ms: waiting for machine to come up
	I0912 18:22:16.124549   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:16.124978   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:16.125010   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:16.124943   11881 retry.go:31] will retry after 366.713849ms: waiting for machine to come up
	I0912 18:22:16.493444   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:16.493873   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:16.493900   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:16.493827   11881 retry.go:31] will retry after 729.944562ms: waiting for machine to come up
	I0912 18:22:17.225781   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:17.226247   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:17.226277   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:17.226208   11881 retry.go:31] will retry after 624.307407ms: waiting for machine to come up
	I0912 18:22:17.851622   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:17.851928   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:17.851954   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:17.851885   11881 retry.go:31] will retry after 744.682967ms: waiting for machine to come up
	I0912 18:22:18.598134   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:18.598579   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:18.598610   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:18.598524   11881 retry.go:31] will retry after 1.352233637s: waiting for machine to come up
	I0912 18:22:19.953104   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:19.953552   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:19.953580   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:19.953494   11881 retry.go:31] will retry after 1.117847462s: waiting for machine to come up
	I0912 18:22:21.072597   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:21.072887   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:21.072910   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:21.072840   11881 retry.go:31] will retry after 1.825071781s: waiting for machine to come up
	I0912 18:22:22.900897   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:22.901287   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:22.901319   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:22.901254   11881 retry.go:31] will retry after 1.838874972s: waiting for machine to come up
	I0912 18:22:24.742307   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:24.742749   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:24.742770   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:24.742712   11881 retry.go:31] will retry after 3.04718466s: waiting for machine to come up
	I0912 18:22:27.793801   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:27.794226   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:27.794247   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:27.794173   11881 retry.go:31] will retry after 3.540210002s: waiting for machine to come up
	I0912 18:22:31.338400   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:31.338763   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find current IP address of domain addons-436608 in network mk-addons-436608
	I0912 18:22:31.338792   11859 main.go:141] libmachine: (addons-436608) DBG | I0912 18:22:31.338712   11881 retry.go:31] will retry after 4.355306807s: waiting for machine to come up
	I0912 18:22:35.696839   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:35.697373   11859 main.go:141] libmachine: (addons-436608) Found IP for machine: 192.168.39.50
	I0912 18:22:35.697417   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has current primary IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:35.697428   11859 main.go:141] libmachine: (addons-436608) Reserving static IP address...
	I0912 18:22:35.697760   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find host DHCP lease matching {name: "addons-436608", mac: "52:54:00:1d:00:2a", ip: "192.168.39.50"} in network mk-addons-436608
	I0912 18:22:35.764965   11859 main.go:141] libmachine: (addons-436608) DBG | Getting to WaitForSSH function...
	I0912 18:22:35.764999   11859 main.go:141] libmachine: (addons-436608) Reserved static IP address: 192.168.39.50
	I0912 18:22:35.765013   11859 main.go:141] libmachine: (addons-436608) Waiting for SSH to be available...
	I0912 18:22:35.767481   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:35.767819   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608
	I0912 18:22:35.767845   11859 main.go:141] libmachine: (addons-436608) DBG | unable to find defined IP address of network mk-addons-436608 interface with MAC address 52:54:00:1d:00:2a
	I0912 18:22:35.768005   11859 main.go:141] libmachine: (addons-436608) DBG | Using SSH client type: external
	I0912 18:22:35.768041   11859 main.go:141] libmachine: (addons-436608) DBG | Using SSH private key: /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa (-rw-------)
	I0912 18:22:35.768146   11859 main.go:141] libmachine: (addons-436608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 18:22:35.768163   11859 main.go:141] libmachine: (addons-436608) DBG | About to run SSH command:
	I0912 18:22:35.768176   11859 main.go:141] libmachine: (addons-436608) DBG | exit 0
	I0912 18:22:35.779295   11859 main.go:141] libmachine: (addons-436608) DBG | SSH cmd err, output: exit status 255: 
	I0912 18:22:35.779317   11859 main.go:141] libmachine: (addons-436608) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0912 18:22:35.779325   11859 main.go:141] libmachine: (addons-436608) DBG | command : exit 0
	I0912 18:22:35.779332   11859 main.go:141] libmachine: (addons-436608) DBG | err     : exit status 255
	I0912 18:22:35.779341   11859 main.go:141] libmachine: (addons-436608) DBG | output  : 
	I0912 18:22:38.780001   11859 main.go:141] libmachine: (addons-436608) DBG | Getting to WaitForSSH function...
	I0912 18:22:38.782297   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:38.782693   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:38.782720   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:38.782869   11859 main.go:141] libmachine: (addons-436608) DBG | Using SSH client type: external
	I0912 18:22:38.782892   11859 main.go:141] libmachine: (addons-436608) DBG | Using SSH private key: /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa (-rw-------)
	I0912 18:22:38.782914   11859 main.go:141] libmachine: (addons-436608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0912 18:22:38.782930   11859 main.go:141] libmachine: (addons-436608) DBG | About to run SSH command:
	I0912 18:22:38.782945   11859 main.go:141] libmachine: (addons-436608) DBG | exit 0
	I0912 18:22:38.871834   11859 main.go:141] libmachine: (addons-436608) DBG | SSH cmd err, output: <nil>: 
	I0912 18:22:38.872110   11859 main.go:141] libmachine: (addons-436608) KVM machine creation complete!
	I0912 18:22:38.872405   11859 main.go:141] libmachine: (addons-436608) Calling .GetConfigRaw
	I0912 18:22:38.872876   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:22:38.873057   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:22:38.873205   11859 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0912 18:22:38.873222   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:22:38.874398   11859 main.go:141] libmachine: Detecting operating system of created instance...
	I0912 18:22:38.874417   11859 main.go:141] libmachine: Waiting for SSH to be available...
	I0912 18:22:38.874426   11859 main.go:141] libmachine: Getting to WaitForSSH function...
	I0912 18:22:38.874436   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:38.876252   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:38.876583   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:38.876624   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:38.876706   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:22:38.876877   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:38.877030   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:38.877206   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:22:38.877367   11859 main.go:141] libmachine: Using SSH client type: native
	I0912 18:22:38.877712   11859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0912 18:22:38.877727   11859 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0912 18:22:38.995294   11859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 18:22:38.995323   11859 main.go:141] libmachine: Detecting the provisioner...
	I0912 18:22:38.995332   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:38.998021   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:38.998302   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:38.998330   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:38.998487   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:22:38.998655   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:38.998785   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:38.998907   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:22:38.999073   11859 main.go:141] libmachine: Using SSH client type: native
	I0912 18:22:38.999520   11859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0912 18:22:38.999539   11859 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0912 18:22:39.116542   11859 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gaa74cea-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0912 18:22:39.116629   11859 main.go:141] libmachine: found compatible host: buildroot
	I0912 18:22:39.116640   11859 main.go:141] libmachine: Provisioning with buildroot...
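The "Detecting the provisioner" step above runs `cat /etc/os-release` and matches the `NAME` field (`Buildroot`) to pick a provisioner. A sketch of that lookup, assuming only the os-release key=value format shown in the log (the function name is hypothetical, not minikube's real helper):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner pulls the NAME field out of /etc/os-release-style
// output, lowercased and unquoted, as in the provisioner-detection
// step above. Illustrative sketch, not minikube's implementation.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		// os-release is KEY=VALUE per line; VALUE may be quoted.
		if v, ok := strings.CutPrefix(sc.Text(), "NAME="); ok {
			return strings.ToLower(strings.Trim(v, `"`))
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2021.02.12-1-gaa74cea-dirty\nID=buildroot\n"
	fmt.Println(detectProvisioner(out)) // buildroot
}
```

Matching "buildroot" here is what routes the run into the Buildroot-specific provisioning path that follows (hostname, certs, container runtime options).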
	I0912 18:22:39.116648   11859 main.go:141] libmachine: (addons-436608) Calling .GetMachineName
	I0912 18:22:39.116905   11859 buildroot.go:166] provisioning hostname "addons-436608"
	I0912 18:22:39.116930   11859 main.go:141] libmachine: (addons-436608) Calling .GetMachineName
	I0912 18:22:39.117106   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:39.119191   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.119464   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:39.119483   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.119620   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:22:39.119774   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:39.119918   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:39.120067   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:22:39.120257   11859 main.go:141] libmachine: Using SSH client type: native
	I0912 18:22:39.120567   11859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0912 18:22:39.120581   11859 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-436608 && echo "addons-436608" | sudo tee /etc/hostname
	I0912 18:22:39.251877   11859 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-436608
	
	I0912 18:22:39.251914   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:39.254461   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.254732   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:39.254763   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.254919   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:22:39.255097   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:39.255286   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:39.255433   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:22:39.255606   11859 main.go:141] libmachine: Using SSH client type: native
	I0912 18:22:39.255912   11859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0912 18:22:39.255929   11859 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-436608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-436608/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-436608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 18:22:39.382689   11859 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 18:22:39.382716   11859 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17233-3774/.minikube CaCertPath:/home/jenkins/minikube-integration/17233-3774/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17233-3774/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17233-3774/.minikube}
	I0912 18:22:39.382759   11859 buildroot.go:174] setting up certificates
	I0912 18:22:39.382769   11859 provision.go:83] configureAuth start
	I0912 18:22:39.382780   11859 main.go:141] libmachine: (addons-436608) Calling .GetMachineName
	I0912 18:22:39.383045   11859 main.go:141] libmachine: (addons-436608) Calling .GetIP
	I0912 18:22:39.385462   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.385778   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:39.385808   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.385916   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:39.387851   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.388156   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:39.388180   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.388300   11859 provision.go:138] copyHostCerts
	I0912 18:22:39.388363   11859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3774/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17233-3774/.minikube/ca.pem (1082 bytes)
	I0912 18:22:39.388512   11859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3774/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17233-3774/.minikube/cert.pem (1123 bytes)
	I0912 18:22:39.388572   11859 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17233-3774/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17233-3774/.minikube/key.pem (1679 bytes)
	I0912 18:22:39.388630   11859 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17233-3774/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17233-3774/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17233-3774/.minikube/certs/ca-key.pem org=jenkins.addons-436608 san=[192.168.39.50 192.168.39.50 localhost 127.0.0.1 minikube addons-436608]
	I0912 18:22:39.629875   11859 provision.go:172] copyRemoteCerts
	I0912 18:22:39.629927   11859 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 18:22:39.629947   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:39.632458   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.632856   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:39.632889   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.633018   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:22:39.633198   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:39.633344   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:22:39.633487   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:22:39.721593   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 18:22:39.741751   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0912 18:22:39.761162   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 18:22:39.780410   11859 provision.go:86] duration metric: configureAuth took 397.625614ms
	I0912 18:22:39.780437   11859 buildroot.go:189] setting minikube options for container-runtime
	I0912 18:22:39.780636   11859 config.go:182] Loaded profile config "addons-436608": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 18:22:39.780658   11859 main.go:141] libmachine: Checking connection to Docker...
	I0912 18:22:39.780668   11859 main.go:141] libmachine: (addons-436608) Calling .GetURL
	I0912 18:22:39.781907   11859 main.go:141] libmachine: (addons-436608) DBG | Using libvirt version 6000000
	I0912 18:22:39.784149   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.784471   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:39.784501   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.784710   11859 main.go:141] libmachine: Docker is up and running!
	I0912 18:22:39.784729   11859 main.go:141] libmachine: Reticulating splines...
	I0912 18:22:39.784738   11859 client.go:171] LocalClient.Create took 26.636734087s
	I0912 18:22:39.784763   11859 start.go:167] duration metric: libmachine.API.Create for "addons-436608" took 26.636787088s
	I0912 18:22:39.784772   11859 start.go:300] post-start starting for "addons-436608" (driver="kvm2")
	I0912 18:22:39.784781   11859 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 18:22:39.784797   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:22:39.785010   11859 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 18:22:39.785028   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:39.787267   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.787585   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:39.787618   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.787733   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:22:39.787914   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:39.788089   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:22:39.788286   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:22:39.877987   11859 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 18:22:39.881843   11859 info.go:137] Remote host: Buildroot 2021.02.12
	I0912 18:22:39.881859   11859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3774/.minikube/addons for local assets ...
	I0912 18:22:39.881926   11859 filesync.go:126] Scanning /home/jenkins/minikube-integration/17233-3774/.minikube/files for local assets ...
	I0912 18:22:39.881960   11859 start.go:303] post-start completed in 97.181798ms
	I0912 18:22:39.881995   11859 main.go:141] libmachine: (addons-436608) Calling .GetConfigRaw
	I0912 18:22:39.882502   11859 main.go:141] libmachine: (addons-436608) Calling .GetIP
	I0912 18:22:39.884904   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.885289   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:39.885328   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.885448   11859 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/config.json ...
	I0912 18:22:39.885598   11859 start.go:128] duration metric: createHost completed in 26.761031645s
	I0912 18:22:39.885618   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:39.887637   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.887906   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:39.887945   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:39.888086   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:22:39.888249   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:39.888388   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:39.888508   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:22:39.888676   11859 main.go:141] libmachine: Using SSH client type: native
	I0912 18:22:39.888969   11859 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0912 18:22:39.888980   11859 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0912 18:22:40.008611   11859 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694542959.985577267
	
	I0912 18:22:40.008641   11859 fix.go:206] guest clock: 1694542959.985577267
	I0912 18:22:40.008652   11859 fix.go:219] Guest: 2023-09-12 18:22:39.985577267 +0000 UTC Remote: 2023-09-12 18:22:39.885608532 +0000 UTC m=+26.857420738 (delta=99.968735ms)
	I0912 18:22:40.008678   11859 fix.go:190] guest clock delta is within tolerance: 99.968735ms
	I0912 18:22:40.008684   11859 start.go:83] releasing machines lock for "addons-436608", held for 26.884201287s
	I0912 18:22:40.008712   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:22:40.008966   11859 main.go:141] libmachine: (addons-436608) Calling .GetIP
	I0912 18:22:40.011294   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:40.011583   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:40.011613   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:40.011822   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:22:40.012295   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:22:40.012476   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:22:40.012553   11859 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 18:22:40.012595   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:40.012708   11859 ssh_runner.go:195] Run: cat /version.json
	I0912 18:22:40.012737   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:22:40.015025   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:40.015153   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:40.015374   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:40.015401   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:40.015456   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:40.015482   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:40.015493   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:22:40.015632   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:22:40.015732   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:40.015806   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:22:40.015892   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:22:40.015963   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:22:40.016017   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:22:40.016066   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:22:40.138883   11859 ssh_runner.go:195] Run: systemctl --version
	I0912 18:22:40.144314   11859 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0912 18:22:40.149224   11859 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0912 18:22:40.149295   11859 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 18:22:40.163962   11859 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 18:22:40.163984   11859 start.go:469] detecting cgroup driver to use...
	I0912 18:22:40.164047   11859 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0912 18:22:40.202503   11859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 18:22:40.213813   11859 docker.go:196] disabling cri-docker service (if available) ...
	I0912 18:22:40.213868   11859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 18:22:40.224746   11859 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 18:22:40.235379   11859 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 18:22:40.328857   11859 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 18:22:40.438140   11859 docker.go:212] disabling docker service ...
	I0912 18:22:40.438224   11859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 18:22:40.450020   11859 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 18:22:40.460378   11859 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 18:22:40.569741   11859 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 18:22:40.680176   11859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 18:22:40.692498   11859 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 18:22:40.707968   11859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0912 18:22:40.717272   11859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 18:22:40.726179   11859 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 18:22:40.726231   11859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 18:22:40.735562   11859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:22:40.744571   11859 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 18:22:40.753169   11859 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 18:22:40.761776   11859 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 18:22:40.770748   11859 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 18:22:40.779524   11859 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 18:22:40.787384   11859 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0912 18:22:40.787434   11859 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0912 18:22:40.798534   11859 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 18:22:40.806758   11859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:22:40.916426   11859 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 18:22:40.948108   11859 start.go:516] Will wait 60s for socket path /run/containerd/containerd.sock
	I0912 18:22:40.948189   11859 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0912 18:22:40.952637   11859 retry.go:31] will retry after 555.523588ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0912 18:22:41.508365   11859 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0912 18:22:41.513406   11859 start.go:537] Will wait 60s for crictl version
	I0912 18:22:41.513511   11859 ssh_runner.go:195] Run: which crictl
	I0912 18:22:41.516929   11859 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 18:22:41.544428   11859 start.go:553] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.3
	RuntimeApiVersion:  v1alpha2
	I0912 18:22:41.544510   11859 ssh_runner.go:195] Run: containerd --version
	I0912 18:22:41.571397   11859 ssh_runner.go:195] Run: containerd --version
	I0912 18:22:41.598747   11859 out.go:177] * Preparing Kubernetes v1.28.1 on containerd 1.7.3 ...
	I0912 18:22:41.600108   11859 main.go:141] libmachine: (addons-436608) Calling .GetIP
	I0912 18:22:41.602516   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:41.602818   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:22:41.602866   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:22:41.603013   11859 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0912 18:22:41.606661   11859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 18:22:41.617810   11859 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0912 18:22:41.617872   11859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 18:22:41.643386   11859 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0912 18:22:41.643447   11859 ssh_runner.go:195] Run: which lz4
	I0912 18:22:41.646842   11859 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 18:22:41.650457   11859 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 18:22:41.650484   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (456573247 bytes)
	I0912 18:22:43.264826   11859 containerd.go:547] Took 1.618003 seconds to copy over tarball
	I0912 18:22:43.264880   11859 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 18:22:45.940728   11859 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.675810397s)
	I0912 18:22:45.940755   11859 containerd.go:554] Took 2.675909 seconds to extract the tarball
	I0912 18:22:45.940766   11859 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 18:22:45.980664   11859 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 18:22:46.081309   11859 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0912 18:22:46.112423   11859 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 18:22:47.160503   11859 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.048045362s)
	I0912 18:22:47.160621   11859 containerd.go:604] all images are preloaded for containerd runtime.
	I0912 18:22:47.160635   11859 cache_images.go:84] Images are preloaded, skipping loading
	I0912 18:22:47.160687   11859 ssh_runner.go:195] Run: sudo crictl info
	I0912 18:22:47.188870   11859 cni.go:84] Creating CNI manager for ""
	I0912 18:22:47.188900   11859 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 18:22:47.188922   11859 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 18:22:47.188948   11859 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.50 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-436608 NodeName:addons-436608 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 18:22:47.189092   11859 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-436608"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 18:22:47.189179   11859 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-436608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-436608 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 18:22:47.189238   11859 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 18:22:47.197768   11859 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 18:22:47.197824   11859 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 18:22:47.205649   11859 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0912 18:22:47.220192   11859 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 18:22:47.234528   11859 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0912 18:22:47.249038   11859 ssh_runner.go:195] Run: grep 192.168.39.50	control-plane.minikube.internal$ /etc/hosts
	I0912 18:22:47.252236   11859 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.50	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 18:22:47.263398   11859 certs.go:56] Setting up /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608 for IP: 192.168.39.50
	I0912 18:22:47.263426   11859 certs.go:190] acquiring lock for shared ca certs: {Name:mkfa1724b42b3ff58ba5d05289a853cf5c0f549c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:47.263581   11859 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17233-3774/.minikube/ca.key
	I0912 18:22:47.364415   11859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17233-3774/.minikube/ca.crt ...
	I0912 18:22:47.364441   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/ca.crt: {Name:mk1792b023c49595e318bcc3907b8c34fe0ce0dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:47.364607   11859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17233-3774/.minikube/ca.key ...
	I0912 18:22:47.364620   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/ca.key: {Name:mk6a9581a687510a2af59add449a101f58ce9cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:47.364714   11859 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17233-3774/.minikube/proxy-client-ca.key
	I0912 18:22:47.696309   11859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17233-3774/.minikube/proxy-client-ca.crt ...
	I0912 18:22:47.696339   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/proxy-client-ca.crt: {Name:mkfa110a52ed1fe4f513d1e206225f02272263ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:47.696524   11859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17233-3774/.minikube/proxy-client-ca.key ...
	I0912 18:22:47.696539   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/proxy-client-ca.key: {Name:mke39b9cd9ace954efcddc805c31fc72fbfca987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:47.696711   11859 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.key
	I0912 18:22:47.696737   11859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt with IP's: []
	I0912 18:22:48.158524   11859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt ...
	I0912 18:22:48.162911   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: {Name:mk948ff477422dcafd25398db0e285f607764c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:48.163105   11859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.key ...
	I0912 18:22:48.163119   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.key: {Name:mk5f8c352002bb6c64d3a47e491f4dbe115711a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:48.163208   11859 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.key.59dcb911
	I0912 18:22:48.163230   11859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.crt.59dcb911 with IP's: [192.168.39.50 10.96.0.1 127.0.0.1 10.0.0.1]
	I0912 18:22:48.296096   11859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.crt.59dcb911 ...
	I0912 18:22:48.296122   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.crt.59dcb911: {Name:mk680e5636208ce1fe74e19b52b6fce798b5677f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:48.296292   11859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.key.59dcb911 ...
	I0912 18:22:48.296305   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.key.59dcb911: {Name:mkb0df40f9f5880afaa56c48d17f88d6e0bb441a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:48.296413   11859 certs.go:337] copying /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.crt.59dcb911 -> /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.crt
	I0912 18:22:48.296484   11859 certs.go:341] copying /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.key.59dcb911 -> /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.key
	I0912 18:22:48.296530   11859 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/proxy-client.key
	I0912 18:22:48.296547   11859 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/proxy-client.crt with IP's: []
	I0912 18:22:48.452713   11859 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/proxy-client.crt ...
	I0912 18:22:48.452742   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/proxy-client.crt: {Name:mk885c7896b93ed22503fa0b405b52ce6fc32a61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:48.452953   11859 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/proxy-client.key ...
	I0912 18:22:48.452967   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/proxy-client.key: {Name:mkf8ceb998d829b3b7ff21a4bcea2472926ad672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:22:48.453153   11859 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3774/.minikube/certs/home/jenkins/minikube-integration/17233-3774/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 18:22:48.453189   11859 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3774/.minikube/certs/home/jenkins/minikube-integration/17233-3774/.minikube/certs/ca.pem (1082 bytes)
	I0912 18:22:48.453216   11859 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3774/.minikube/certs/home/jenkins/minikube-integration/17233-3774/.minikube/certs/cert.pem (1123 bytes)
	I0912 18:22:48.453239   11859 certs.go:437] found cert: /home/jenkins/minikube-integration/17233-3774/.minikube/certs/home/jenkins/minikube-integration/17233-3774/.minikube/certs/key.pem (1679 bytes)
	I0912 18:22:48.453737   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 18:22:48.475733   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 18:22:48.496798   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 18:22:48.517651   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 18:22:48.538878   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 18:22:48.559605   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 18:22:48.580280   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 18:22:48.604886   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0912 18:22:48.630441   11859 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17233-3774/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 18:22:48.655770   11859 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 18:22:48.671877   11859 ssh_runner.go:195] Run: openssl version
	I0912 18:22:48.677563   11859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 18:22:48.686329   11859 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:22:48.690403   11859 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 18:22 /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:22:48.690443   11859 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 18:22:48.695478   11859 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 18:22:48.704668   11859 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 18:22:48.708498   11859 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 18:22:48.708549   11859 kubeadm.go:404] StartCluster: {Name:addons-436608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-436608 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:22:48.708648   11859 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0912 18:22:48.708696   11859 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 18:22:48.744441   11859 cri.go:89] found id: ""
	I0912 18:22:48.744504   11859 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 18:22:48.753020   11859 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 18:22:48.761164   11859 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 18:22:48.769141   11859 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 18:22:48.769175   11859 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0912 18:22:48.944504   11859 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 18:23:00.026227   11859 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0912 18:23:00.026302   11859 kubeadm.go:322] [preflight] Running pre-flight checks
	I0912 18:23:00.026402   11859 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 18:23:00.026521   11859 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 18:23:00.026645   11859 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 18:23:00.026740   11859 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 18:23:00.028063   11859 out.go:204]   - Generating certificates and keys ...
	I0912 18:23:00.028155   11859 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0912 18:23:00.028242   11859 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0912 18:23:00.028344   11859 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 18:23:00.028433   11859 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0912 18:23:00.028517   11859 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0912 18:23:00.028590   11859 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0912 18:23:00.028664   11859 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0912 18:23:00.028825   11859 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-436608 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0912 18:23:00.028904   11859 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0912 18:23:00.029081   11859 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-436608 localhost] and IPs [192.168.39.50 127.0.0.1 ::1]
	I0912 18:23:00.029255   11859 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 18:23:00.029358   11859 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 18:23:00.029418   11859 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0912 18:23:00.029510   11859 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 18:23:00.029570   11859 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 18:23:00.029620   11859 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 18:23:00.029673   11859 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 18:23:00.029750   11859 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 18:23:00.029869   11859 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 18:23:00.029952   11859 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 18:23:00.031333   11859 out.go:204]   - Booting up control plane ...
	I0912 18:23:00.031464   11859 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 18:23:00.031556   11859 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 18:23:00.031633   11859 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 18:23:00.031763   11859 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 18:23:00.031877   11859 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 18:23:00.031930   11859 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0912 18:23:00.032127   11859 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 18:23:00.032228   11859 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.001796 seconds
	I0912 18:23:00.032359   11859 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 18:23:00.032557   11859 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 18:23:00.032651   11859 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 18:23:00.032863   11859 kubeadm.go:322] [mark-control-plane] Marking the node addons-436608 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 18:23:00.032943   11859 kubeadm.go:322] [bootstrap-token] Using token: 2gjrkm.8h7e3817utn605o1
	I0912 18:23:00.034230   11859 out.go:204]   - Configuring RBAC rules ...
	I0912 18:23:00.034360   11859 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 18:23:00.034463   11859 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 18:23:00.034635   11859 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 18:23:00.034794   11859 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 18:23:00.034946   11859 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 18:23:00.035057   11859 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 18:23:00.035221   11859 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 18:23:00.035285   11859 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0912 18:23:00.035342   11859 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0912 18:23:00.035355   11859 kubeadm.go:322] 
	I0912 18:23:00.035432   11859 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0912 18:23:00.035441   11859 kubeadm.go:322] 
	I0912 18:23:00.035550   11859 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0912 18:23:00.035562   11859 kubeadm.go:322] 
	I0912 18:23:00.035602   11859 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0912 18:23:00.035686   11859 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 18:23:00.035761   11859 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 18:23:00.035772   11859 kubeadm.go:322] 
	I0912 18:23:00.035859   11859 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0912 18:23:00.035875   11859 kubeadm.go:322] 
	I0912 18:23:00.035953   11859 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 18:23:00.035964   11859 kubeadm.go:322] 
	I0912 18:23:00.036030   11859 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0912 18:23:00.036122   11859 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 18:23:00.036222   11859 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 18:23:00.036243   11859 kubeadm.go:322] 
	I0912 18:23:00.036361   11859 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 18:23:00.036490   11859 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0912 18:23:00.036503   11859 kubeadm.go:322] 
	I0912 18:23:00.036635   11859 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2gjrkm.8h7e3817utn605o1 \
	I0912 18:23:00.036779   11859 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0bf0efc2ba46067027f014cbebca54f39b406b55e16aee627ca676f30fa6147a \
	I0912 18:23:00.036822   11859 kubeadm.go:322] 	--control-plane 
	I0912 18:23:00.036833   11859 kubeadm.go:322] 
	I0912 18:23:00.036944   11859 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0912 18:23:00.036953   11859 kubeadm.go:322] 
	I0912 18:23:00.037053   11859 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2gjrkm.8h7e3817utn605o1 \
	I0912 18:23:00.037152   11859 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0bf0efc2ba46067027f014cbebca54f39b406b55e16aee627ca676f30fa6147a 
	I0912 18:23:00.037164   11859 cni.go:84] Creating CNI manager for ""
	I0912 18:23:00.037173   11859 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 18:23:00.038612   11859 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 18:23:00.039827   11859 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 18:23:00.052828   11859 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0912 18:23:00.078203   11859 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 18:23:00.078261   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:00.078294   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=7fcf473f700c1ee60c8afd1005162a3d3f02aa75 minikube.k8s.io/name=addons-436608 minikube.k8s.io/updated_at=2023_09_12T18_23_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:00.141125   11859 ops.go:34] apiserver oom_adj: -16
	I0912 18:23:00.353851   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:00.439735   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:01.038361   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:01.538378   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:02.038563   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:02.538627   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:03.038573   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:03.538460   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:04.038350   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:04.538290   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:05.037735   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:05.538457   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:06.038680   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:06.538480   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:07.038437   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:07.538699   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:08.037930   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:08.537964   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:09.038443   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:09.538377   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:10.037763   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:10.537852   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:11.038180   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:11.538114   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:12.038148   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:12.538794   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:13.037813   11859 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 18:23:13.195746   11859 kubeadm.go:1081] duration metric: took 13.117525635s to wait for elevateKubeSystemPrivileges.
	I0912 18:23:13.195774   11859 kubeadm.go:406] StartCluster complete in 24.48723019s
	I0912 18:23:13.195791   11859 settings.go:142] acquiring lock: {Name:mk5e29c73b947d8c19e4c25fcd5f76d3bad777e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:23:13.195924   11859 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17233-3774/kubeconfig
	I0912 18:23:13.196255   11859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/kubeconfig: {Name:mk0652765cd4d7c44e927d28591cad7403172b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:23:13.196453   11859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 18:23:13.196476   11859 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0912 18:23:13.196572   11859 addons.go:69] Setting volumesnapshots=true in profile "addons-436608"
	I0912 18:23:13.196584   11859 addons.go:69] Setting cloud-spanner=true in profile "addons-436608"
	I0912 18:23:13.196611   11859 addons.go:69] Setting inspektor-gadget=true in profile "addons-436608"
	I0912 18:23:13.196628   11859 addons.go:69] Setting storage-provisioner=true in profile "addons-436608"
	I0912 18:23:13.196631   11859 addons.go:231] Setting addon cloud-spanner=true in "addons-436608"
	I0912 18:23:13.196638   11859 addons.go:231] Setting addon inspektor-gadget=true in "addons-436608"
	I0912 18:23:13.196641   11859 addons.go:231] Setting addon storage-provisioner=true in "addons-436608"
	I0912 18:23:13.196645   11859 addons.go:69] Setting ingress=true in profile "addons-436608"
	I0912 18:23:13.196646   11859 addons.go:69] Setting default-storageclass=true in profile "addons-436608"
	I0912 18:23:13.196670   11859 addons.go:231] Setting addon ingress=true in "addons-436608"
	I0912 18:23:13.196677   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.196678   11859 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-436608"
	I0912 18:23:13.196681   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.196688   11859 addons.go:69] Setting ingress-dns=true in profile "addons-436608"
	I0912 18:23:13.196689   11859 config.go:182] Loaded profile config "addons-436608": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 18:23:13.196697   11859 addons.go:231] Setting addon ingress-dns=true in "addons-436608"
	I0912 18:23:13.196725   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.196728   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.196731   11859 addons.go:69] Setting helm-tiller=true in profile "addons-436608"
	I0912 18:23:13.196742   11859 addons.go:231] Setting addon helm-tiller=true in "addons-436608"
	I0912 18:23:13.196770   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.197130   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.197149   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.197150   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.197159   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.197164   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.197164   11859 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-436608"
	I0912 18:23:13.196618   11859 addons.go:69] Setting metrics-server=true in profile "addons-436608"
	I0912 18:23:13.197186   11859 addons.go:231] Setting addon metrics-server=true in "addons-436608"
	I0912 18:23:13.196611   11859 addons.go:231] Setting addon volumesnapshots=true in "addons-436608"
	I0912 18:23:13.197248   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.197282   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.197171   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.197131   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.197580   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.196682   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.197607   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.197613   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.197625   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.197666   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.196575   11859 addons.go:69] Setting gcp-auth=true in profile "addons-436608"
	I0912 18:23:13.197780   11859 mustload.go:65] Loading cluster: addons-436608
	I0912 18:23:13.197902   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.197917   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.197963   11859 config.go:182] Loaded profile config "addons-436608": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 18:23:13.198263   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.196623   11859 addons.go:69] Setting registry=true in profile "addons-436608"
	I0912 18:23:13.198299   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.198315   11859 addons.go:231] Setting addon registry=true in "addons-436608"
	I0912 18:23:13.197131   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.198364   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.197131   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.198406   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.197221   11859 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-436608"
	I0912 18:23:13.198480   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.198820   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.198840   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.200770   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.201117   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.201147   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.216787   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35663
	I0912 18:23:13.217007   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38157
	I0912 18:23:13.217297   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.217342   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.217791   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.217793   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.217824   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.217837   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.218186   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.218187   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.218712   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.218755   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.218712   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.218831   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.222035   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35825
	I0912 18:23:13.222726   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.223295   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.223317   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.223641   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.223809   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.227078   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33747
	I0912 18:23:13.227427   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.227936   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.227957   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.228477   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.229063   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.229097   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.229702   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43079
	I0912 18:23:13.230088   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.230225   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.230660   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.230683   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.231035   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.231197   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.231491   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0912 18:23:13.232916   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.232952   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.247249   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.247339   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46813
	I0912 18:23:13.247478   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34371
	I0912 18:23:13.247555   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I0912 18:23:13.247883   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37003
	I0912 18:23:13.248118   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.248132   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.248290   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.248355   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.248420   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.248553   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.248794   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.248813   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.248919   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.248935   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.249004   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.249240   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.249286   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.249449   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.249465   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.249518   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.249762   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.250122   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.250158   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.250248   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.250283   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.250439   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.250576   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.250596   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.251156   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.251200   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.251388   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.251901   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.251937   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.256463   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34489
	I0912 18:23:13.256631   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0912 18:23:13.257075   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.257173   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.257493   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.257515   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.257634   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.257666   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.257870   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.257997   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.258094   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.258158   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.258380   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44429
	I0912 18:23:13.258867   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.259406   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.259423   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.259793   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.260313   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.260352   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.264254   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I0912 18:23:13.264266   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.264763   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.266682   11859 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 18:23:13.265467   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.269457   11859 addons.go:231] Setting addon default-storageclass=true in "addons-436608"
	I0912 18:23:13.269515   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:13.269781   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.269806   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.269980   11859 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 18:23:13.269991   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 18:23:13.270008   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.270066   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.270451   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.270645   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.273143   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0912 18:23:13.273396   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.273634   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.275443   11859 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0912 18:23:13.273865   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.274478   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.274590   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.276097   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42151
	I0912 18:23:13.277058   11859 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 18:23:13.277072   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 18:23:13.277093   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.277151   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.277169   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.277211   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.277421   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.277583   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.277704   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.277807   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.277905   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.278057   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.278204   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.278220   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.279973   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33539
	I0912 18:23:13.280250   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.280453   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.280515   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.281406   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.281425   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.281749   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.281949   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.282002   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.284007   11859 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 18:23:13.282792   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38449
	I0912 18:23:13.283545   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.285895   11859 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 18:23:13.285917   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 18:23:13.285937   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.284124   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.286010   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.284612   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.284361   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.284729   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42383
	I0912 18:23:13.286087   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I0912 18:23:13.285042   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.286204   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.288206   11859 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0912 18:23:13.286914   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.286953   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.286986   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.287017   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.289118   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.289844   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.289870   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.289883   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.289653   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.291539   11859 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0912 18:23:13.289804   11859 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0912 18:23:13.290117   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.290255   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.290337   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.290356   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.290377   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.290467   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37391
	I0912 18:23:13.291050   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34753
	I0912 18:23:13.295108   11859 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0912 18:23:13.293890   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.293939   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.294008   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.294055   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41625
	I0912 18:23:13.294395   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.294419   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.294450   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.294692   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.296656   11859 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 18:23:13.296758   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.297813   11859 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 18:23:13.297832   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 18:23:13.297852   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.297952   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.297954   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0912 18:23:13.297976   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.298984   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.298988   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.299043   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.299063   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.299070   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.299077   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.299086   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.299468   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.299478   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.299484   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.299743   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.299877   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.299889   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.299942   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.300267   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.300331   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.301104   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:13.301129   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:13.302346   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.304105   11859 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 18:23:13.305439   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.305471   11859 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 18:23:13.304152   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.305485   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 18:23:13.305505   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.303501   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.305563   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.303684   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.305585   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.303860   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.303124   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.305639   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.307100   11859 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0912 18:23:13.305661   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.304942   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.305682   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.305813   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.309645   11859 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I0912 18:23:13.311085   11859 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0912 18:23:13.311106   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 18:23:13.309762   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.309009   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.312476   11859 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 18:23:13.309119   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.309233   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.308515   11859 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 18:23:13.310540   11859 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-436608" context rescaled to 1 replicas
	I0912 18:23:13.311124   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.311339   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.311353   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.315671   11859 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 18:23:13.314394   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 18:23:13.314422   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.314507   11859 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.50 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0912 18:23:13.314701   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.314809   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.318971   11859 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 18:23:13.317187   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.317207   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.317873   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.317878   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
	I0912 18:23:13.317904   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.318220   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.321147   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0912 18:23:13.321891   11859 out.go:177] * Verifying Kubernetes components...
	I0912 18:23:13.321911   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.322085   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.322330   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.323190   11859 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 18:23:13.323213   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.323404   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.323406   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.323530   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.323581   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:13.324082   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.324686   11859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:23:13.326435   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.328005   11859 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 18:23:13.326445   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.326504   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.326648   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.326670   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.326934   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:13.329323   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:13.330971   11859 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 18:23:13.329512   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.329716   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.330017   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:13.333111   11859 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 18:23:13.332076   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.332097   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.332114   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:13.334904   11859 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 18:23:13.336221   11859 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 18:23:13.336240   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 18:23:13.335257   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.336256   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.335489   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:13.336485   11859 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 18:23:13.336497   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 18:23:13.336512   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.337862   11859 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0912 18:23:13.339093   11859 out.go:177]   - Using image docker.io/registry:2.8.1
	I0912 18:23:13.340450   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.340454   11859 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 18:23:13.340513   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0912 18:23:13.340530   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:13.339738   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.340582   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.340606   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.339917   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.339360   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.340643   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.340659   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.340678   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.340844   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.340872   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.340989   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.341092   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.341025   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.343012   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.343375   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:13.343401   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:13.343486   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:13.343645   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:13.343816   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:13.343956   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:13.540032   11859 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 18:23:13.603449   11859 node_ready.go:35] waiting up to 6m0s for node "addons-436608" to be "Ready" ...
	I0912 18:23:13.606517   11859 node_ready.go:49] node "addons-436608" has status "Ready":"True"
	I0912 18:23:13.606537   11859 node_ready.go:38] duration metric: took 3.056473ms waiting for node "addons-436608" to be "Ready" ...
	I0912 18:23:13.606545   11859 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:23:13.612910   11859 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pfknn" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:13.706258   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 18:23:13.776056   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 18:23:13.828452   11859 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 18:23:13.828471   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 18:23:13.874592   11859 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 18:23:13.874621   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 18:23:13.875113   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 18:23:13.884268   11859 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 18:23:13.884285   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 18:23:13.960729   11859 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 18:23:13.960742   11859 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 18:23:13.960757   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 18:23:13.960770   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 18:23:13.962800   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 18:23:13.975203   11859 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 18:23:13.975223   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 18:23:14.021503   11859 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 18:23:14.021523   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 18:23:14.030197   11859 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 18:23:14.030222   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 18:23:14.067797   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 18:23:14.119114   11859 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 18:23:14.119139   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 18:23:14.158671   11859 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 18:23:14.158692   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 18:23:14.195427   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 18:23:14.209099   11859 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 18:23:14.209124   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 18:23:14.222114   11859 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 18:23:14.222130   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 18:23:14.232805   11859 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 18:23:14.232825   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 18:23:14.240430   11859 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 18:23:14.240456   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 18:23:14.248046   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 18:23:14.255347   11859 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 18:23:14.255362   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 18:23:14.289783   11859 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 18:23:14.289807   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 18:23:14.318935   11859 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 18:23:14.318958   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 18:23:14.332107   11859 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 18:23:14.332128   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 18:23:14.373876   11859 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 18:23:14.373899   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 18:23:14.398064   11859 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 18:23:14.398088   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 18:23:14.414951   11859 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 18:23:14.414970   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0912 18:23:14.477841   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 18:23:14.499852   11859 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 18:23:14.499872   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 18:23:14.598870   11859 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 18:23:14.598893   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 18:23:14.670873   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 18:23:14.757576   11859 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 18:23:14.757604   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 18:23:14.870161   11859 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 18:23:14.870182   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 18:23:14.979853   11859 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 18:23:14.979883   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 18:23:15.286387   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 18:23:15.366503   11859 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 18:23:15.366523   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 18:23:15.441618   11859 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 18:23:15.441647   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 18:23:15.625235   11859 pod_ready.go:102] pod "coredns-5dd5756b68-pfknn" in "kube-system" namespace has status "Ready":"False"
	I0912 18:23:15.658720   11859 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 18:23:15.658747   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 18:23:15.977232   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 18:23:17.305675   11859 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.765600361s)
	I0912 18:23:17.305706   11859 start.go:917] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
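The CoreDNS injection completed above works by piping the live `coredns` ConfigMap through `sed`, inserting a `hosts { ... }` stanza before the `forward` plugin and a `log` directive before `errors`. A minimal local reproduction of that same `sed` transform, run against a hypothetical two-line Corefile fragment instead of the real ConfigMap (no cluster needed; GNU sed assumed for the one-line `i \` syntax):

```shell
# Sample Corefile fragment standing in for the real coredns ConfigMap;
# the sed expressions are the same ones shown in the log line above.
printf '        errors\n        forward . /etc/resolv.conf {\n' |
  sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
      -e '/^        errors *$/i \        log'
# The output gains a "log" line before "errors" and a "hosts" block
# (resolving host.minikube.internal) before the "forward" line.
```

In the real command, the transformed YAML is then fed back through `kubectl replace -f -`, which is why the whole pipeline runs inside a single `bash -c` over SSH.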
	I0912 18:23:17.623365   11859 pod_ready.go:92] pod "coredns-5dd5756b68-pfknn" in "kube-system" namespace has status "Ready":"True"
	I0912 18:23:17.623387   11859 pod_ready.go:81] duration metric: took 4.010455377s waiting for pod "coredns-5dd5756b68-pfknn" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:17.623397   11859 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-436608" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:17.627399   11859 pod_ready.go:92] pod "etcd-addons-436608" in "kube-system" namespace has status "Ready":"True"
	I0912 18:23:17.627414   11859 pod_ready.go:81] duration metric: took 4.01032ms waiting for pod "etcd-addons-436608" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:17.627424   11859 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-436608" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:17.631651   11859 pod_ready.go:92] pod "kube-apiserver-addons-436608" in "kube-system" namespace has status "Ready":"True"
	I0912 18:23:17.631667   11859 pod_ready.go:81] duration metric: took 4.237013ms waiting for pod "kube-apiserver-addons-436608" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:17.631675   11859 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-436608" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:17.635808   11859 pod_ready.go:92] pod "kube-controller-manager-addons-436608" in "kube-system" namespace has status "Ready":"True"
	I0912 18:23:17.635822   11859 pod_ready.go:81] duration metric: took 4.142394ms waiting for pod "kube-controller-manager-addons-436608" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:17.635830   11859 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zpgz8" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:17.639797   11859 pod_ready.go:92] pod "kube-proxy-zpgz8" in "kube-system" namespace has status "Ready":"True"
	I0912 18:23:17.639813   11859 pod_ready.go:81] duration metric: took 3.977146ms waiting for pod "kube-proxy-zpgz8" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:17.639820   11859 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-436608" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:18.022024   11859 pod_ready.go:92] pod "kube-scheduler-addons-436608" in "kube-system" namespace has status "Ready":"True"
	I0912 18:23:18.022044   11859 pod_ready.go:81] duration metric: took 382.218687ms waiting for pod "kube-scheduler-addons-436608" in "kube-system" namespace to be "Ready" ...
	I0912 18:23:18.022051   11859 pod_ready.go:38] duration metric: took 4.415497207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 18:23:18.022068   11859 api_server.go:52] waiting for apiserver process to appear ...
	I0912 18:23:18.022120   11859 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:23:19.866491   11859 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 18:23:19.866523   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:19.869282   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:19.869682   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:19.869721   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:19.869850   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:19.870055   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:19.870202   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:19.870354   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:20.416840   11859 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 18:23:20.630672   11859 addons.go:231] Setting addon gcp-auth=true in "addons-436608"
	I0912 18:23:20.630729   11859 host.go:66] Checking if "addons-436608" exists ...
	I0912 18:23:20.631181   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:20.631224   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:20.645901   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0912 18:23:20.646273   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:20.646713   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:20.646740   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:20.647488   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:20.648110   11859 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:23:20.648158   11859 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:23:20.663040   11859 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42087
	I0912 18:23:20.663488   11859 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:23:20.663886   11859 main.go:141] libmachine: Using API Version  1
	I0912 18:23:20.663905   11859 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:23:20.664279   11859 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:23:20.664496   11859 main.go:141] libmachine: (addons-436608) Calling .GetState
	I0912 18:23:20.666066   11859 main.go:141] libmachine: (addons-436608) Calling .DriverName
	I0912 18:23:20.666322   11859 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 18:23:20.666348   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHHostname
	I0912 18:23:20.669307   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:20.669714   11859 main.go:141] libmachine: (addons-436608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:00:2a", ip: ""} in network mk-addons-436608: {Iface:virbr1 ExpiryTime:2023-09-12 19:22:28 +0000 UTC Type:0 Mac:52:54:00:1d:00:2a Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:addons-436608 Clientid:01:52:54:00:1d:00:2a}
	I0912 18:23:20.669745   11859 main.go:141] libmachine: (addons-436608) DBG | domain addons-436608 has defined IP address 192.168.39.50 and MAC address 52:54:00:1d:00:2a in network mk-addons-436608
	I0912 18:23:20.669879   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHPort
	I0912 18:23:20.670045   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHKeyPath
	I0912 18:23:20.670196   11859 main.go:141] libmachine: (addons-436608) Calling .GetSSHUsername
	I0912 18:23:20.670375   11859 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/addons-436608/id_rsa Username:docker}
	I0912 18:23:22.880651   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.10456609s)
	I0912 18:23:22.880703   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.880716   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.880731   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.174435174s)
	I0912 18:23:22.880770   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.880817   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.880834   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.918012372s)
	I0912 18:23:22.880858   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.880869   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.880780   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.005642337s)
	I0912 18:23:22.880910   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.880920   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.880994   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.813105762s)
	I0912 18:23:22.881033   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.881054   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.881353   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.881377   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.881391   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.881396   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.881401   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.881406   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.881420   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.881429   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.881450   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.881459   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.881475   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.881459   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.881521   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.881529   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.881557   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.881597   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.881596   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.686129706s)
	I0912 18:23:22.881606   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.881616   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.881626   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.881648   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.881663   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.881779   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.881751   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.633680756s)
	I0912 18:23:22.881800   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.881808   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.881819   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.881826   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.881832   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.403966365s)
	I0912 18:23:22.881837   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.881848   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.881857   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.881883   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.881894   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.881948   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.211044243s)
	I0912 18:23:22.881985   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.881994   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.882115   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.595695442s)
	W0912 18:23:22.882144   11859 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 18:23:22.882183   11859 retry.go:31] will retry after 368.279005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
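The `ensure CRDs are installed first` failure above is the classic CRD/CR race: the `VolumeSnapshotClass` object is applied in the same `kubectl apply` batch that creates its CRD, and the API server has not yet established the new type, so the resource mapping lookup fails. minikube's `retry.go` handles this by simply re-applying after a short delay (`will retry after 368.279005ms`), at which point the CRD exists and the apply succeeds. A hypothetical sketch of that retry behavior, with a `fake_apply` function standing in for `kubectl apply` (it fails once, then succeeds, simulating the CRD becoming established between attempts):

```shell
#!/bin/sh
# fake_apply simulates the race in the log: attempt 1 hits
# "no matches for kind VolumeSnapshotClass", attempt 2 succeeds.
n=0
fake_apply() {
  n=$((n + 1))
  [ "$n" -ge 2 ]
}

# Retry loop with a growing delay, mirroring the "will retry after
# ...ms" lines emitted by retry.go (delays here are illustrative).
attempt=1
until fake_apply; do
  [ "$attempt" -ge 5 ] && { echo "giving up after $attempt attempts" >&2; exit 1; }
  echo "apply failed, will retry after $((300 * attempt))ms"
  sleep 0.3
  attempt=$((attempt + 1))
done
echo "applied on attempt $n"
```

An alternative (not what minikube does here) is to apply the CRD manifests first and block on `kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io` before applying the snapshot class, which avoids the retry entirely.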
	I0912 18:23:22.882217   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.882230   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.882281   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.882291   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.882300   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.882309   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.882313   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.882325   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.882333   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.882363   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.882384   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.882393   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.882405   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.882414   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.882874   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.882905   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.882914   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.882923   11859 addons.go:467] Verifying addon ingress=true in "addons-436608"
	I0912 18:23:22.887782   11859 out.go:177] * Verifying ingress addon...
	I0912 18:23:22.883130   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.883153   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.883174   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.883190   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.883210   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.883241   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.883271   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.883289   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.883861   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.883898   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.884034   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.884064   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.889150   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.889165   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.889177   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.889181   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.889195   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.889199   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.889184   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.889210   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.889151   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.889222   11859 addons.go:467] Verifying addon registry=true in "addons-436608"
	I0912 18:23:22.889230   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:22.889241   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.889186   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:22.891518   11859 out.go:177] * Verifying registry addon...
	I0912 18:23:22.890041   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.890054   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.890094   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.890116   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:22.890246   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.890401   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:22.892171   11859 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 18:23:22.892911   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.892936   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.892948   11859 addons.go:467] Verifying addon metrics-server=true in "addons-436608"
	I0912 18:23:22.892890   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:22.894035   11859 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 18:23:22.902389   11859 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 18:23:22.902406   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:22.903365   11859 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 18:23:22.903378   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:22.908774   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:22.909083   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:23.250648   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 18:23:23.415808   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:23.416244   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:23.936525   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:23.936773   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:24.416470   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:24.420555   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:24.658548   11859 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.636404004s)
	I0912 18:23:24.658580   11859 api_server.go:72] duration metric: took 11.341350521s to wait for apiserver process to appear ...
	I0912 18:23:24.658588   11859 api_server.go:88] waiting for apiserver healthz status ...
	I0912 18:23:24.658606   11859 api_server.go:253] Checking apiserver healthz at https://192.168.39.50:8443/healthz ...
	I0912 18:23:24.658638   11859 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.992292093s)
	I0912 18:23:24.660039   11859 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0912 18:23:24.658717   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.681447713s)
	I0912 18:23:24.661276   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:24.661295   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:24.662468   11859 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0912 18:23:24.663635   11859 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 18:23:24.663660   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 18:23:24.661549   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:24.663691   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:24.663703   11859 api_server.go:279] https://192.168.39.50:8443/healthz returned 200:
	ok
	I0912 18:23:24.663708   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:24.661576   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:24.663719   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:24.664094   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:24.664119   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:24.664136   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:24.664147   11859 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-436608"
	I0912 18:23:24.665470   11859 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 18:23:24.665005   11859 api_server.go:141] control plane version: v1.28.1
	I0912 18:23:24.666897   11859 api_server.go:131] duration metric: took 8.299949ms to wait for apiserver health ...
	I0912 18:23:24.666906   11859 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 18:23:24.667410   11859 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 18:23:24.673344   11859 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 18:23:24.673360   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:24.678280   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:24.679306   11859 system_pods.go:59] 17 kube-system pods found
	I0912 18:23:24.679326   11859 system_pods.go:61] "coredns-5dd5756b68-pfknn" [4743ea2b-f2c0-4021-9862-f9a8da481576] Running
	I0912 18:23:24.679335   11859 system_pods.go:61] "csi-hostpath-attacher-0" [cbc89928-bc2e-4097-be9f-fdd14a867dfb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 18:23:24.679343   11859 system_pods.go:61] "csi-hostpath-resizer-0" [b641bff4-093d-4229-a64d-c655574b34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 18:23:24.679353   11859 system_pods.go:61] "csi-hostpathplugin-p4x5z" [50d8d6c4-1bb4-4453-aa22-1932036e811c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 18:23:24.679370   11859 system_pods.go:61] "etcd-addons-436608" [cc826ee7-170a-407d-b71c-a9e5784b8f34] Running
	I0912 18:23:24.679377   11859 system_pods.go:61] "kube-apiserver-addons-436608" [6b4a5d13-57c3-4039-877a-e6cc0189e350] Running
	I0912 18:23:24.679383   11859 system_pods.go:61] "kube-controller-manager-addons-436608" [a977e392-0c09-45df-8e49-aa0828c5e7bd] Running
	I0912 18:23:24.679398   11859 system_pods.go:61] "kube-ingress-dns-minikube" [3f89ab14-055e-4356-8cc8-ab5d633e85cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 18:23:24.679403   11859 system_pods.go:61] "kube-proxy-zpgz8" [b182ef99-4246-4bc2-8d34-391923555e79] Running
	I0912 18:23:24.679408   11859 system_pods.go:61] "kube-scheduler-addons-436608" [aca326d4-8f23-4d7a-bf63-9c15ba19dc80] Running
	I0912 18:23:24.679414   11859 system_pods.go:61] "metrics-server-7c66d45ddc-jdzwh" [e53337c4-8bb1-400f-b129-a93c921f4de5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 18:23:24.679423   11859 system_pods.go:61] "registry-bwndt" [e99d30e3-1cee-4f98-ae61-60da1507b58c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 18:23:24.679430   11859 system_pods.go:61] "registry-proxy-vqw5g" [a373c566-4d02-4ea2-9a53-2086521429fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 18:23:24.679440   11859 system_pods.go:61] "snapshot-controller-58dbcc7b99-8vf2g" [34a98366-9df9-46de-9751-eef9eaedc867] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 18:23:24.679454   11859 system_pods.go:61] "snapshot-controller-58dbcc7b99-x9s9x" [b92b6042-5cff-4100-90ea-947481424568] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 18:23:24.679466   11859 system_pods.go:61] "storage-provisioner" [79c5cf24-73bb-4c85-8492-cd966c0fcd0c] Running
	I0912 18:23:24.679477   11859 system_pods.go:61] "tiller-deploy-7b677967b9-wfwv2" [d8bd764e-e992-4694-950e-4d96b971fdc4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 18:23:24.679488   11859 system_pods.go:74] duration metric: took 12.575561ms to wait for pod list to return data ...
	I0912 18:23:24.679501   11859 default_sa.go:34] waiting for default service account to be created ...
	I0912 18:23:24.682317   11859 default_sa.go:45] found service account: "default"
	I0912 18:23:24.682334   11859 default_sa.go:55] duration metric: took 2.826392ms for default service account to be created ...
	I0912 18:23:24.682343   11859 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 18:23:24.697539   11859 system_pods.go:86] 17 kube-system pods found
	I0912 18:23:24.697561   11859 system_pods.go:89] "coredns-5dd5756b68-pfknn" [4743ea2b-f2c0-4021-9862-f9a8da481576] Running
	I0912 18:23:24.697569   11859 system_pods.go:89] "csi-hostpath-attacher-0" [cbc89928-bc2e-4097-be9f-fdd14a867dfb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 18:23:24.697576   11859 system_pods.go:89] "csi-hostpath-resizer-0" [b641bff4-093d-4229-a64d-c655574b34cb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 18:23:24.697592   11859 system_pods.go:89] "csi-hostpathplugin-p4x5z" [50d8d6c4-1bb4-4453-aa22-1932036e811c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 18:23:24.697605   11859 system_pods.go:89] "etcd-addons-436608" [cc826ee7-170a-407d-b71c-a9e5784b8f34] Running
	I0912 18:23:24.697611   11859 system_pods.go:89] "kube-apiserver-addons-436608" [6b4a5d13-57c3-4039-877a-e6cc0189e350] Running
	I0912 18:23:24.697622   11859 system_pods.go:89] "kube-controller-manager-addons-436608" [a977e392-0c09-45df-8e49-aa0828c5e7bd] Running
	I0912 18:23:24.697632   11859 system_pods.go:89] "kube-ingress-dns-minikube" [3f89ab14-055e-4356-8cc8-ab5d633e85cc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 18:23:24.697639   11859 system_pods.go:89] "kube-proxy-zpgz8" [b182ef99-4246-4bc2-8d34-391923555e79] Running
	I0912 18:23:24.697644   11859 system_pods.go:89] "kube-scheduler-addons-436608" [aca326d4-8f23-4d7a-bf63-9c15ba19dc80] Running
	I0912 18:23:24.697651   11859 system_pods.go:89] "metrics-server-7c66d45ddc-jdzwh" [e53337c4-8bb1-400f-b129-a93c921f4de5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 18:23:24.697659   11859 system_pods.go:89] "registry-bwndt" [e99d30e3-1cee-4f98-ae61-60da1507b58c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 18:23:24.697666   11859 system_pods.go:89] "registry-proxy-vqw5g" [a373c566-4d02-4ea2-9a53-2086521429fd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 18:23:24.697676   11859 system_pods.go:89] "snapshot-controller-58dbcc7b99-8vf2g" [34a98366-9df9-46de-9751-eef9eaedc867] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 18:23:24.697687   11859 system_pods.go:89] "snapshot-controller-58dbcc7b99-x9s9x" [b92b6042-5cff-4100-90ea-947481424568] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 18:23:24.697698   11859 system_pods.go:89] "storage-provisioner" [79c5cf24-73bb-4c85-8492-cd966c0fcd0c] Running
	I0912 18:23:24.697711   11859 system_pods.go:89] "tiller-deploy-7b677967b9-wfwv2" [d8bd764e-e992-4694-950e-4d96b971fdc4] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 18:23:24.697723   11859 system_pods.go:126] duration metric: took 15.374068ms to wait for k8s-apps to be running ...
	I0912 18:23:24.697737   11859 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 18:23:24.697782   11859 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:23:24.832835   11859 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 18:23:24.832869   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 18:23:24.910747   11859 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 18:23:24.910776   11859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0912 18:23:24.916359   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:24.916632   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:25.096692   11859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 18:23:25.187005   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:25.416669   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:25.417296   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:25.684458   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:25.921200   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:25.922171   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:26.158418   11859 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.460613422s)
	I0912 18:23:26.158451   11859 system_svc.go:56] duration metric: took 1.460711577s WaitForService to wait for kubelet.
	I0912 18:23:26.158461   11859 kubeadm.go:581] duration metric: took 12.841232365s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 18:23:26.158483   11859 node_conditions.go:102] verifying NodePressure condition ...
	I0912 18:23:26.158617   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.907928688s)
	I0912 18:23:26.158666   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:26.158683   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:26.158983   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:26.159003   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:26.159016   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:26.159035   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:26.159048   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:26.159225   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:26.159269   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:26.162172   11859 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0912 18:23:26.162203   11859 node_conditions.go:123] node cpu capacity is 2
	I0912 18:23:26.162216   11859 node_conditions.go:105] duration metric: took 3.726745ms to run NodePressure ...
	I0912 18:23:26.162229   11859 start.go:228] waiting for startup goroutines ...
	I0912 18:23:26.184201   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:26.427205   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:26.428028   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:26.571496   11859 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.47475642s)
	I0912 18:23:26.571545   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:26.571561   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:26.571969   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:26.571990   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:26.572000   11859 main.go:141] libmachine: Making call to close driver server
	I0912 18:23:26.572010   11859 main.go:141] libmachine: (addons-436608) Calling .Close
	I0912 18:23:26.571971   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:26.572296   11859 main.go:141] libmachine: (addons-436608) DBG | Closing plugin on server side
	I0912 18:23:26.572322   11859 main.go:141] libmachine: Successfully made call to close driver server
	I0912 18:23:26.572338   11859 main.go:141] libmachine: Making call to close connection to plugin binary
	I0912 18:23:26.574392   11859 addons.go:467] Verifying addon gcp-auth=true in "addons-436608"
	I0912 18:23:26.576228   11859 out.go:177] * Verifying gcp-auth addon...
	I0912 18:23:26.578408   11859 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 18:23:26.591695   11859 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 18:23:26.591722   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:26.605382   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:26.684846   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:26.915279   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:26.917030   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:27.109104   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:27.184735   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:27.418543   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:27.418567   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:27.610786   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:27.686384   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:27.914146   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:27.914720   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:28.111060   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:28.184733   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:28.413494   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:28.414423   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:28.611528   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:28.684142   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:28.916322   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:28.916501   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:29.109940   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:29.184056   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:29.415059   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:29.415112   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:29.611953   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:29.685257   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:29.913984   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:29.914009   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:30.109760   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:30.186268   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:30.415639   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:30.417852   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:30.610153   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:30.684015   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:30.915682   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:30.916188   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:31.109633   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:31.184818   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:31.414645   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:31.414769   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:31.608897   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:31.684591   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:31.914608   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:31.914732   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:32.112239   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:32.186509   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:32.416867   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:32.417135   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:32.609224   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:32.685337   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:32.914221   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:32.914328   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:33.109215   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:33.186050   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:33.414818   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:33.414880   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:33.609244   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:33.684165   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:33.916813   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:33.918835   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:34.108784   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:34.183745   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:34.412839   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:34.415156   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:34.609477   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:34.684427   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:34.914622   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:34.915466   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:35.109679   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:35.183744   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:35.417304   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:35.417568   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:35.609475   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:35.684280   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:35.915269   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:35.915674   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:36.110226   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:36.184941   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:36.413074   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:36.414192   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:36.609780   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:36.688814   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:36.915429   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:36.915453   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:37.109406   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:37.184542   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:37.414245   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:37.416088   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:37.609259   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:37.686919   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:37.915761   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:37.919555   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:38.109796   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:38.184843   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:38.415308   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:38.418100   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:38.608630   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:38.684326   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:38.915738   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:38.915939   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:39.109435   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:39.185155   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:39.414527   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:39.414830   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:39.609841   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:39.686328   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:39.917038   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:39.917422   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:40.109894   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:40.183980   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:40.419419   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:40.419695   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:40.609693   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:40.684695   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:40.917533   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:40.917833   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:41.112976   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:41.184508   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:41.413915   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:41.414295   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:41.609643   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:41.684019   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:41.916340   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:41.916834   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:42.109902   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:42.184776   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:42.415729   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:42.415980   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:42.610532   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:42.683346   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:42.916241   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:42.916329   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:43.109991   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:43.184748   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:43.415191   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:43.418358   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:43.610047   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:43.684049   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:43.914689   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:43.915498   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:44.110199   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:44.184102   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:44.414075   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:44.415725   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:44.609843   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:44.683916   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:44.915609   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:44.916187   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:45.109779   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:45.185212   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:45.413837   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:45.414176   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:45.611465   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:45.688433   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:45.914936   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:45.915762   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:46.108884   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:46.183909   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:46.414007   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:46.414046   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:46.609840   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:46.684883   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:46.913969   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:46.913969   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:47.109023   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:47.183591   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:47.415070   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:47.416471   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:47.610260   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:47.684050   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:47.919025   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:47.920383   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:48.110133   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:48.257717   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:48.504755   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 18:23:48.505361   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:48.608983   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:48.683707   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:48.914988   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:48.915059   11859 kapi.go:107] duration metric: took 26.021020906s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 18:23:49.109976   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:49.183949   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:49.413565   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:49.610159   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:49.684605   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:49.913412   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:50.109348   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:50.184226   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:50.414507   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:50.609878   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:50.684922   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:50.914669   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:51.110050   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:51.186178   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:51.413616   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:51.609869   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:51.685709   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:51.914377   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:52.109823   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:52.185496   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:52.413893   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:52.608819   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:52.686735   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:52.913159   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:53.109851   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:53.185427   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:53.414322   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:53.608939   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:53.684876   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:53.981945   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:54.110358   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:54.184854   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:54.414835   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:54.609780   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:54.683962   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:54.913718   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:55.110547   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:55.184468   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:55.413733   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:55.610018   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:55.684009   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:55.914044   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:56.108996   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:56.183711   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:56.415479   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:56.832193   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:56.834162   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:56.914072   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:57.110411   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:57.185290   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:57.418878   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:57.609876   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:57.685264   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:57.916301   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:58.110278   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:58.184822   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:58.414225   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:58.608739   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:58.683490   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:58.996311   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:59.169779   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:59.185149   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:59.414812   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:23:59.609741   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:23:59.683296   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:23:59.913930   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:00.108749   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:00.184997   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:00.412993   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:00.608590   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:00.684349   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:00.913664   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:01.110491   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:01.183383   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:01.414231   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:01.863910   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:01.875151   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:01.941583   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:02.122420   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:02.185046   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:02.414714   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:02.609906   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:02.683916   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:02.917717   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:03.109224   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:03.184922   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:03.415763   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:03.610113   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:03.684659   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:03.915111   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:04.108775   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:04.184146   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:04.413744   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:04.609803   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:04.683913   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:04.913329   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:05.110454   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:05.184578   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:05.484308   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:05.609347   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:05.686617   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:05.914980   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:06.109408   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:06.185039   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:06.417963   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:06.609719   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:06.683810   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:06.915419   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:07.108792   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:07.184147   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:07.413578   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:07.609820   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:07.684928   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:08.141247   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:08.142032   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:08.184107   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:08.413624   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:08.609181   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:08.684816   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:08.914156   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:09.110583   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:09.183738   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:09.413666   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:09.609862   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:09.683799   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:09.915838   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:10.112323   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:10.184407   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:10.414201   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:10.608766   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:10.685322   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:10.914249   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:11.109007   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:11.184228   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:11.413932   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:11.608911   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:11.684628   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:11.914653   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:12.109852   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:12.184063   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:12.415250   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:12.609264   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:12.684708   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:12.916917   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:13.108935   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:13.185906   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:13.414099   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:13.609607   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:13.684508   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:13.914414   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:14.109566   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:14.184343   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:14.414317   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:14.610188   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:14.683924   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:14.913467   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:15.108608   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:15.185370   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:15.414386   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:15.609384   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:15.684608   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:15.915055   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:16.109641   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:16.183747   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:16.413526   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:16.610663   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:16.683846   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:16.914056   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:17.108439   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:17.184296   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:17.414228   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:17.608830   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:17.683294   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 18:24:17.914359   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:18.114133   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:18.184338   11859 kapi.go:107] duration metric: took 53.516925409s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 18:24:18.414006   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:18.608453   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:18.913715   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:19.109060   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:19.413650   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:19.609670   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:19.917206   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:20.111303   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:20.413751   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:20.609970   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:20.914350   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:21.110803   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:21.414098   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:21.609379   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:21.915319   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:22.119745   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:22.413826   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:22.610831   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:22.914921   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:23.112168   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:23.415733   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:23.610832   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:23.915165   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:24.110601   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:24.415285   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:24.610114   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:24.914319   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:25.109660   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:25.414354   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:25.609592   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:25.913979   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:26.108831   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:26.415796   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:26.609595   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:26.913655   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:27.109458   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:27.419462   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:27.609626   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:27.913750   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:28.110502   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:28.413882   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:28.609618   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:28.914237   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:29.109185   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:29.413455   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:29.609987   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:29.914050   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:30.108947   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:30.415478   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:30.610296   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:30.913463   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:31.109189   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:31.414902   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:31.608851   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:31.937706   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:32.320939   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:32.414741   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:32.610970   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:32.914771   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:33.109117   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:33.414147   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:33.611228   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:33.915212   11859 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 18:24:34.109706   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:34.414034   11859 kapi.go:107] duration metric: took 1m11.521860467s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 18:24:34.609982   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:35.110381   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:35.609681   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:36.110185   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:36.609771   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:37.113396   11859 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 18:24:37.608913   11859 kapi.go:107] duration metric: took 1m11.030497017s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 18:24:37.610889   11859 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-436608 cluster.
	I0912 18:24:37.612241   11859 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 18:24:37.613591   11859 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 18:24:37.614899   11859 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, default-storageclass, ingress-dns, helm-tiller, metrics-server, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0912 18:24:37.616056   11859 addons.go:502] enable addons completed in 1m24.41958916s: enabled=[cloud-spanner storage-provisioner default-storageclass ingress-dns helm-tiller metrics-server inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0912 18:24:37.616082   11859 start.go:233] waiting for cluster config update ...
	I0912 18:24:37.616095   11859 start.go:242] writing updated cluster config ...
	I0912 18:24:37.616310   11859 ssh_runner.go:195] Run: rm -f paused
	I0912 18:24:37.663093   11859 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0912 18:24:37.664972   11859 out.go:177] * Done! kubectl is now configured to use "addons-436608" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID
	37c7e084500fd       98f6c3b32d565       7 seconds ago        Exited              helm-test                                0                   0481bc989ac7d
	d35a52af42e49       3d193a254b78c       10 seconds ago       Running             headlamp                                 0                   7a602cd0f2a65
	eb6e8d3a63245       6d2a98b274382       19 seconds ago       Running             gcp-auth                                 0                   c61fb2d3443ed
	d5cd6809944e3       47fe3698318ff       22 seconds ago       Running             controller                               0                   a8a68faf0aab1
	90408eb5e081c       7e7451bb70423       36 seconds ago       Exited              patch                                    2                   a6182b9435f82
	29b926c7631aa       738351fd438f0       38 seconds ago       Running             csi-snapshotter                          0                   74b5484f82112
	7d761d278b0ec       931dbfd16f87c       40 seconds ago       Running             csi-provisioner                          0                   74b5484f82112
	3f43690ba27b9       e899260153aed       42 seconds ago       Running             liveness-probe                           0                   74b5484f82112
	5f0df924044b8       e255e073c508c       43 seconds ago       Running             hostpath                                 0                   74b5484f82112
	1575b49ac8128       88ef14a257f42       44 seconds ago       Running             node-driver-registrar                    0                   74b5484f82112
	6f93d11246cce       7e7451bb70423       45 seconds ago       Exited              patch                                    0                   c488c76ebdbdd
	4ae3c99c02242       7e7451bb70423       46 seconds ago       Exited              create                                   0                   9515f2ac74d26
	487ad32d639e0       19a639eda60f0       46 seconds ago       Running             csi-resizer                              0                   3f106d64680c3
	8023ea3cc5fec       a1ed5895ba635       47 seconds ago       Running             csi-external-health-monitor-controller   0                   74b5484f82112
	7988db3904191       7e7451bb70423       49 seconds ago       Exited              create                                   0                   0d86817298638
	a0016c82270d9       59cbb42146a37       49 seconds ago       Running             csi-attacher                             0                   c3c1ccb0989c8
	97ab09c12d110       aa61ee9c70bc4       53 seconds ago       Running             volume-snapshot-controller               0                   0172bd50f7733
	e1fb958206129       aa61ee9c70bc4       53 seconds ago       Running             volume-snapshot-controller               0                   3c803fb6677f9
	82bb740f3ad18       d77e118adfd12       55 seconds ago       Running             gadget                                   0                   248466972dc82
	a5cfc84782b85       1499ed4fbd0aa       About a minute ago   Running             minikube-ingress-dns                     0                   da4a53d0a5fb7
	1323bb57ed9b5       6e38f40d628db       About a minute ago   Running             storage-provisioner                      0                   dba780d271f6c
	36d27a29fc8b9       ead0a4a53df89       About a minute ago   Running             coredns                                  0                   84505e2a7b482
	22cdfd4f7b39b       6cdbabde3874e       About a minute ago   Running             kube-proxy                               0                   e33a3fffc7c7f
	06399101e2b9f       73deb9a3f7025       2 minutes ago        Running             etcd                                     0                   77b5e6fe7f12a
	a318a833e8cde       b462ce0c8b1ff       2 minutes ago        Running             kube-scheduler                           0                   375325cc60617
	1268ab886a12b       821b3dfea27be       2 minutes ago        Running             kube-controller-manager                  0                   25727345b035a
	51721181c4b9d       5c801295c21d0       2 minutes ago        Running             kube-apiserver                           0                   6bd82c2de4d09
	
	* 
	* ==> containerd <==
	* -- Journal begins at Tue 2023-09-12 18:22:25 UTC, ends at Tue 2023-09-12 18:24:56 UTC. --
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.777427370Z" level=info msg="shim disconnected" id=ae2febcc5797b78d92392d044ba5decc18af0a32bfc9ee9c3ec5b54831b0a1b6 namespace=k8s.io
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.777492983Z" level=warning msg="cleaning up after shim disconnected" id=ae2febcc5797b78d92392d044ba5decc18af0a32bfc9ee9c3ec5b54831b0a1b6 namespace=k8s.io
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.777504574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.799701331Z" level=info msg="shim disconnected" id=2655bcb0ca6c30a28e2b530f75559cc92c5da34c9cf45f7990276595ce1c8d2e namespace=k8s.io
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.799878629Z" level=warning msg="cleaning up after shim disconnected" id=2655bcb0ca6c30a28e2b530f75559cc92c5da34c9cf45f7990276595ce1c8d2e namespace=k8s.io
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.800005634Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.815001043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx,Uid:620956ed-8225-423b-bcaf-ba968f8e638a,Namespace:default,Attempt:0,} returns sandbox id \"27be36813a03cea663f332ee8873f822eb079ddb377604e94ae10ed785a07732\""
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.820967371Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.829089235Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.891638156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.891882155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.891908857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 12 18:24:54 addons-436608 containerd[682]: time="2023-09-12T18:24:54.891975044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.057878980Z" level=info msg="TearDown network for sandbox \"2655bcb0ca6c30a28e2b530f75559cc92c5da34c9cf45f7990276595ce1c8d2e\" successfully"
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.058035836Z" level=info msg="StopPodSandbox for \"2655bcb0ca6c30a28e2b530f75559cc92c5da34c9cf45f7990276595ce1c8d2e\" returns successfully"
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.102756576Z" level=info msg="TearDown network for sandbox \"ae2febcc5797b78d92392d044ba5decc18af0a32bfc9ee9c3ec5b54831b0a1b6\" successfully"
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.102793786Z" level=info msg="StopPodSandbox for \"ae2febcc5797b78d92392d044ba5decc18af0a32bfc9ee9c3ec5b54831b0a1b6\" returns successfully"
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.349531039Z" level=info msg="RemoveContainer for \"e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6\""
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.357233823Z" level=info msg="RemoveContainer for \"e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6\" returns successfully"
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.363631650Z" level=error msg="ContainerStatus for \"e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6\": not found"
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.366008895Z" level=info msg="RemoveContainer for \"b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757\""
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.376663254Z" level=info msg="RemoveContainer for \"b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757\" returns successfully"
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.377504076Z" level=error msg="ContainerStatus for \"b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757\": not found"
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.504425040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:task-pv-pod,Uid:ae629cef-d99e-420a-ba98-1b27cdfa70ee,Namespace:default,Attempt:0,} returns sandbox id \"5dd2e566049354463b90dd98508e33c1c5f58e8a6d92d16c5009aa6f6c5e3d15\""
	Sep 12 18:24:55 addons-436608 containerd[682]: time="2023-09-12T18:24:55.529094221Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	
	* 
	* ==> coredns [36d27a29fc8b98827cbff58f04c2dbb6017ee3b476c99cf685fed06ab346e27f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:34838 - 18076 "HINFO IN 5452589275632361412.7922865526623654575. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006658248s
	[INFO] 10.244.0.18:36829 - 11218 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000300487s
	[INFO] 10.244.0.18:58554 - 31697 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000065243s
	[INFO] 10.244.0.18:55426 - 54421 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102729s
	[INFO] 10.244.0.18:49040 - 43091 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.002164797s
	[INFO] 10.244.0.18:43906 - 35011 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009614s
	[INFO] 10.244.0.18:40479 - 42917 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000068095s
	[INFO] 10.244.0.18:49135 - 9440 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000371213s
	[INFO] 10.244.0.18:47568 - 30756 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000762974s
	[INFO] 10.244.0.21:49086 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000356213s
	[INFO] 10.244.0.21:49446 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126875s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-436608
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-436608
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fcf473f700c1ee60c8afd1005162a3d3f02aa75
	                    minikube.k8s.io/name=addons-436608
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T18_23_00_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-436608
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-436608"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 18:22:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-436608
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 18:24:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 18:24:31 +0000   Tue, 12 Sep 2023 18:22:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 18:24:31 +0000   Tue, 12 Sep 2023 18:22:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 18:24:31 +0000   Tue, 12 Sep 2023 18:22:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 18:24:31 +0000   Tue, 12 Sep 2023 18:23:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    addons-436608
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3876ad1222a4a45b1adc8734188cd06
	  System UUID:                a3876ad1-222a-4a45-b1ad-c8734188cd06
	  Boot ID:                    2aa96e18-ddf7-4715-8d02-9558ef873bfb
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.3
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gadget                      gadget-2h7fx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  gcp-auth                    gcp-auth-d4c87556c-jzm5s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  headlamp                    headlamp-699c48fb74-2x8cb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  ingress-nginx               ingress-nginx-controller-798b8b85d7-gftxk    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-pfknn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     103s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 csi-hostpathplugin-p4x5z                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 etcd-addons-436608                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         117s
	  kube-system                 kube-apiserver-addons-436608                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-addons-436608        200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-zpgz8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-addons-436608                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 snapshot-controller-58dbcc7b99-8vf2g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 snapshot-controller-58dbcc7b99-x9s9x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m5s)  kubelet          Node addons-436608 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m5s)  kubelet          Node addons-436608 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m5s)  kubelet          Node addons-436608 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node addons-436608 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node addons-436608 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node addons-436608 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                116s                 kubelet          Node addons-436608 status is now: NodeReady
	  Normal  RegisteredNode           104s                 node-controller  Node addons-436608 event: Registered Node addons-436608 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.097131] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.313383] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.818487] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138029] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.979067] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.447814] systemd-fstab-generator[549]: Ignoring "noauto" for root device
	[  +0.098354] systemd-fstab-generator[560]: Ignoring "noauto" for root device
	[  +0.136601] systemd-fstab-generator[573]: Ignoring "noauto" for root device
	[  +0.111225] systemd-fstab-generator[584]: Ignoring "noauto" for root device
	[  +0.237743] systemd-fstab-generator[611]: Ignoring "noauto" for root device
	[  +5.163827] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +5.448894] systemd-fstab-generator[832]: Ignoring "noauto" for root device
	[  +8.217844] systemd-fstab-generator[1196]: Ignoring "noauto" for root device
	[Sep12 18:23] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.025863] kauditd_printk_skb: 36 callbacks suppressed
	[ +13.900019] kauditd_printk_skb: 33 callbacks suppressed
	[Sep12 18:24] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.243948] kauditd_printk_skb: 2 callbacks suppressed
	[ +17.439917] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.620106] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.842947] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.400166] kauditd_printk_skb: 23 callbacks suppressed
	
	* 
	* ==> etcd [06399101e2b9f297baa6ce63ac0c1e43e3fda63a47698830bc9b58a0f0082e6b] <==
	* {"level":"info","ts":"2023-09-12T18:23:56.824647Z","caller":"traceutil/trace.go:171","msg":"trace[340392727] linearizableReadLoop","detail":"{readStateIndex:925; appliedIndex:924; }","duration":"221.523613ms","start":"2023-09-12T18:23:56.60311Z","end":"2023-09-12T18:23:56.824633Z","steps":["trace[340392727] 'read index received'  (duration: 221.402313ms)","trace[340392727] 'applied index is now lower than readState.Index'  (duration: 120.959µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T18:23:56.824815Z","caller":"traceutil/trace.go:171","msg":"trace[1689275234] transaction","detail":"{read_only:false; response_revision:901; number_of_response:1; }","duration":"285.41667ms","start":"2023-09-12T18:23:56.53939Z","end":"2023-09-12T18:23:56.824807Z","steps":["trace[1689275234] 'process raft request'  (duration: 285.164722ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:23:56.825017Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.913519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10532"}
	{"level":"info","ts":"2023-09-12T18:23:56.825039Z","caller":"traceutil/trace.go:171","msg":"trace[1437650482] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:901; }","duration":"221.946634ms","start":"2023-09-12T18:23:56.603086Z","end":"2023-09-12T18:23:56.825032Z","steps":["trace[1437650482] 'agreement among raft nodes before linearized reading'  (duration: 221.877476ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:23:56.825314Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.665969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78047"}
	{"level":"info","ts":"2023-09-12T18:23:56.825379Z","caller":"traceutil/trace.go:171","msg":"trace[1456270504] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:901; }","duration":"148.740522ms","start":"2023-09-12T18:23:56.676632Z","end":"2023-09-12T18:23:56.825373Z","steps":["trace[1456270504] 'agreement among raft nodes before linearized reading'  (duration: 148.48654ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-12T18:23:58.989339Z","caller":"traceutil/trace.go:171","msg":"trace[1488137121] transaction","detail":"{read_only:false; response_revision:905; number_of_response:1; }","duration":"152.523789ms","start":"2023-09-12T18:23:58.836801Z","end":"2023-09-12T18:23:58.989324Z","steps":["trace[1488137121] 'process raft request'  (duration: 152.055481ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-12T18:24:01.85566Z","caller":"traceutil/trace.go:171","msg":"trace[1593940121] linearizableReadLoop","detail":"{readStateIndex:937; appliedIndex:936; }","duration":"251.535875ms","start":"2023-09-12T18:24:01.60411Z","end":"2023-09-12T18:24:01.855646Z","steps":["trace[1593940121] 'read index received'  (duration: 251.412328ms)","trace[1593940121] 'applied index is now lower than readState.Index'  (duration: 123.184µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-12T18:24:01.856063Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.966903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10532"}
	{"level":"info","ts":"2023-09-12T18:24:01.856226Z","caller":"traceutil/trace.go:171","msg":"trace[1494030008] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:911; }","duration":"252.039458ms","start":"2023-09-12T18:24:01.604079Z","end":"2023-09-12T18:24:01.856118Z","steps":["trace[1494030008] 'agreement among raft nodes before linearized reading'  (duration: 251.891648ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:24:01.856312Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.841401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78047"}
	{"level":"info","ts":"2023-09-12T18:24:01.856342Z","caller":"traceutil/trace.go:171","msg":"trace[667029762] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:911; }","duration":"179.879533ms","start":"2023-09-12T18:24:01.676455Z","end":"2023-09-12T18:24:01.856335Z","steps":["trace[667029762] 'agreement among raft nodes before linearized reading'  (duration: 179.628828ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-12T18:24:01.856409Z","caller":"traceutil/trace.go:171","msg":"trace[15737211] transaction","detail":"{read_only:false; response_revision:911; number_of_response:1; }","duration":"286.614095ms","start":"2023-09-12T18:24:01.569787Z","end":"2023-09-12T18:24:01.856401Z","steps":["trace[15737211] 'process raft request'  (duration: 285.776871ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:24:01.856065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.452506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-436608\" ","response":"range_response_count:1 size:5220"}
	{"level":"info","ts":"2023-09-12T18:24:01.856525Z","caller":"traceutil/trace.go:171","msg":"trace[253617267] range","detail":"{range_begin:/registry/minions/addons-436608; range_end:; response_count:1; response_revision:911; }","duration":"190.928601ms","start":"2023-09-12T18:24:01.66559Z","end":"2023-09-12T18:24:01.856518Z","steps":["trace[253617267] 'agreement among raft nodes before linearized reading'  (duration: 190.347903ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:24:05.476991Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.146632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-12T18:24:05.477048Z","caller":"traceutil/trace.go:171","msg":"trace[228962029] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:938; }","duration":"189.216386ms","start":"2023-09-12T18:24:05.287819Z","end":"2023-09-12T18:24:05.477036Z","steps":["trace[228962029] 'range keys from in-memory index tree'  (duration: 189.05298ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-12T18:24:08.132467Z","caller":"traceutil/trace.go:171","msg":"trace[238996097] linearizableReadLoop","detail":"{readStateIndex:981; appliedIndex:980; }","duration":"225.529947ms","start":"2023-09-12T18:24:07.906923Z","end":"2023-09-12T18:24:08.132453Z","steps":["trace[238996097] 'read index received'  (duration: 225.357757ms)","trace[238996097] 'applied index is now lower than readState.Index'  (duration: 171.701µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T18:24:08.132539Z","caller":"traceutil/trace.go:171","msg":"trace[1739121476] transaction","detail":"{read_only:false; response_revision:954; number_of_response:1; }","duration":"266.234757ms","start":"2023-09-12T18:24:07.866299Z","end":"2023-09-12T18:24:08.132534Z","steps":["trace[1739121476] 'process raft request'  (duration: 266.028855ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:24:08.132667Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"225.769991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14199"}
	{"level":"info","ts":"2023-09-12T18:24:08.132685Z","caller":"traceutil/trace.go:171","msg":"trace[1845393706] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:954; }","duration":"225.800971ms","start":"2023-09-12T18:24:07.906878Z","end":"2023-09-12T18:24:08.132679Z","steps":["trace[1845393706] 'agreement among raft nodes before linearized reading'  (duration: 225.737823ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:24:32.313084Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"298.376516ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-12T18:24:32.313143Z","caller":"traceutil/trace.go:171","msg":"trace[1625029510] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:1061; }","duration":"298.47131ms","start":"2023-09-12T18:24:32.014661Z","end":"2023-09-12T18:24:32.313133Z","steps":["trace[1625029510] 'count revisions from in-memory index tree'  (duration: 298.302477ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T18:24:32.313372Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.36444ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10846"}
	{"level":"info","ts":"2023-09-12T18:24:32.313392Z","caller":"traceutil/trace.go:171","msg":"trace[1072126617] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1061; }","duration":"210.421549ms","start":"2023-09-12T18:24:32.102964Z","end":"2023-09-12T18:24:32.313386Z","steps":["trace[1072126617] 'range keys from in-memory index tree'  (duration: 210.270102ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [eb6e8d3a632455fec56bd9c9a9a2cce06052513da30b07bdf6fd4aa3af93397d] <==
	* 2023/09/12 18:24:36 GCP Auth Webhook started!
	2023/09/12 18:24:38 Ready to marshal response ...
	2023/09/12 18:24:38 Ready to write response ...
	2023/09/12 18:24:38 Ready to marshal response ...
	2023/09/12 18:24:38 Ready to write response ...
	2023/09/12 18:24:39 Ready to marshal response ...
	2023/09/12 18:24:39 Ready to write response ...
	2023/09/12 18:24:42 Ready to marshal response ...
	2023/09/12 18:24:42 Ready to write response ...
	2023/09/12 18:24:47 Ready to marshal response ...
	2023/09/12 18:24:47 Ready to write response ...
	2023/09/12 18:24:53 Ready to marshal response ...
	2023/09/12 18:24:53 Ready to write response ...
	2023/09/12 18:24:54 Ready to marshal response ...
	2023/09/12 18:24:54 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:24:56 up 2 min,  0 users,  load average: 2.02, 1.08, 0.42
	Linux addons-436608 5.10.57 #1 SMP Thu Sep 7 15:04:01 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [51721181c4b9dbc76ea55c93b94f84ca4b3793ae8a47c6b4485f147dbed11736] <==
	* I0912 18:23:24.349594       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.111.33.134"}
	I0912 18:23:24.373899       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0912 18:23:24.549698       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.99.237.95"}
	W0912 18:23:25.514674       1 aggregator.go:164] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0912 18:23:26.384731       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.227.240"}
	W0912 18:23:44.890846       1 handler_proxy.go:93] no RequestInfo found in the context
	E0912 18:23:44.890928       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0912 18:23:44.891484       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.5.195:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.5.195:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.5.195:443: connect: connection refused
	I0912 18:23:44.891921       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.109.5.195:443: connect: connection refused
	I0912 18:23:44.891978       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0912 18:23:44.892730       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.5.195:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.5.195:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.5.195:443: connect: connection refused
	E0912 18:23:44.897988       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.5.195:443/apis/metrics.k8s.io/v1beta1: Get "https://10.109.5.195:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.109.5.195:443: connect: connection refused
	I0912 18:23:44.990317       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0912 18:23:56.829022       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0912 18:24:38.942611       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.182.201"}
	E0912 18:24:43.392370       1 controller.go:159] removing "v1beta1.metrics.k8s.io" from AggregationController failed with: resource not found
	E0912 18:24:45.906597       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0912 18:24:45.906639       1 handler_proxy.go:93] no RequestInfo found in the context
	E0912 18:24:45.906686       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0912 18:24:45.906694       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 18:24:53.454798       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0912 18:24:53.705938       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.177.75"}
	
	* 
	* ==> kube-controller-manager [1268ab886a12bc7b121c95c318c812a0839000a37b3399fb7a1d3a771ea35d84] <==
	* I0912 18:24:37.177760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="13.44778ms"
	I0912 18:24:37.178525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="47.556µs"
	I0912 18:24:38.969863       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-699c48fb74 to 1"
	I0912 18:24:38.992536       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-699c48fb74-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I0912 18:24:38.999653       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="29.236448ms"
	E0912 18:24:38.999703       1 replica_set.go:557] sync "headlamp/headlamp-699c48fb74" failed with pods "headlamp-699c48fb74-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0912 18:24:39.022536       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-699c48fb74-2x8cb"
	I0912 18:24:39.041171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="39.374081ms"
	I0912 18:24:39.070645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="29.243192ms"
	I0912 18:24:39.081653       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="10.954008ms"
	I0912 18:24:39.082043       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="203.067µs"
	I0912 18:24:43.020765       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0912 18:24:43.028179       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0912 18:24:43.089806       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0912 18:24:43.099980       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0912 18:24:43.411361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="5.927µs"
	I0912 18:24:46.224279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="14.662655ms"
	I0912 18:24:46.224487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="66.67µs"
	I0912 18:24:46.978773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="22.745995ms"
	I0912 18:24:46.979819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="68.67µs"
	I0912 18:24:49.208609       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="4.141µs"
	I0912 18:24:50.979954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="6.215µs"
	I0912 18:24:51.912922       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0912 18:24:53.325447       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0912 18:24:54.167732       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="8.547µs"
	
	* 
	* ==> kube-proxy [22cdfd4f7b39b08dcff0c626e993c90f5b1d2c452b186c21523b9ba0327bbece] <==
	* I0912 18:23:13.812273       1 server_others.go:69] "Using iptables proxy"
	I0912 18:23:13.828665       1 node.go:141] Successfully retrieved node IP: 192.168.39.50
	I0912 18:23:13.901464       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0912 18:23:13.901485       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0912 18:23:13.904124       1 server_others.go:152] "Using iptables Proxier"
	I0912 18:23:13.904163       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 18:23:13.904409       1 server.go:846] "Version info" version="v1.28.1"
	I0912 18:23:13.904419       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 18:23:13.906902       1 config.go:188] "Starting service config controller"
	I0912 18:23:13.906914       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 18:23:13.906932       1 config.go:97] "Starting endpoint slice config controller"
	I0912 18:23:13.906935       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 18:23:13.907731       1 config.go:315] "Starting node config controller"
	I0912 18:23:13.907738       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 18:23:14.007652       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0912 18:23:14.007667       1 shared_informer.go:318] Caches are synced for service config
	I0912 18:23:14.007863       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [a318a833e8cdecc460a536da9d70ace41a2d01134bd3530df832502ee7186047] <==
	* W0912 18:22:56.983578       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 18:22:56.983713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0912 18:22:56.983848       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 18:22:56.983958       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0912 18:22:56.989390       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 18:22:56.989530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0912 18:22:57.826342       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 18:22:57.826850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0912 18:22:57.846142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 18:22:57.846167       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0912 18:22:57.866899       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 18:22:57.866947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0912 18:22:57.867000       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 18:22:57.867032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0912 18:22:57.890279       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 18:22:57.890327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0912 18:22:57.976673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 18:22:57.976725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0912 18:22:58.011944       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 18:22:58.012013       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 18:22:58.110995       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 18:22:58.111292       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0912 18:22:58.222669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 18:22:58.222721       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0912 18:22:59.763097       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-09-12 18:22:25 UTC, ends at Tue 2023-09-12 18:24:56 UTC. --
	Sep 12 18:24:53 addons-436608 kubelet[1203]: I0912 18:24:53.742227    1203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/620956ed-8225-423b-bcaf-ba968f8e638a-gcp-creds\") pod \"nginx\" (UID: \"620956ed-8225-423b-bcaf-ba968f8e638a\") " pod="default/nginx"
	Sep 12 18:24:54 addons-436608 kubelet[1203]: I0912 18:24:54.017400    1203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d8bd764e-e992-4694-950e-4d96b971fdc4" path="/var/lib/kubelet/pods/d8bd764e-e992-4694-950e-4d96b971fdc4/volumes"
	Sep 12 18:24:54 addons-436608 kubelet[1203]: I0912 18:24:54.017910    1203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fda7a5d8-0bc8-43c4-9d68-dcb2779962bf" path="/var/lib/kubelet/pods/fda7a5d8-0bc8-43c4-9d68-dcb2779962bf/volumes"
	Sep 12 18:24:54 addons-436608 kubelet[1203]: I0912 18:24:54.301606    1203 topology_manager.go:215] "Topology Admit Handler" podUID="ae629cef-d99e-420a-ba98-1b27cdfa70ee" podNamespace="default" podName="task-pv-pod"
	Sep 12 18:24:54 addons-436608 kubelet[1203]: I0912 18:24:54.341393    1203 scope.go:117] "RemoveContainer" containerID="abf982841eb76fabc203855eb17c4b37b6460db9ee65b3b3e10f900da3a1f3f5"
	Sep 12 18:24:54 addons-436608 kubelet[1203]: I0912 18:24:54.446336    1203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ae629cef-d99e-420a-ba98-1b27cdfa70ee-gcp-creds\") pod \"task-pv-pod\" (UID: \"ae629cef-d99e-420a-ba98-1b27cdfa70ee\") " pod="default/task-pv-pod"
	Sep 12 18:24:54 addons-436608 kubelet[1203]: I0912 18:24:54.446380    1203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6qgw\" (UniqueName: \"kubernetes.io/projected/ae629cef-d99e-420a-ba98-1b27cdfa70ee-kube-api-access-k6qgw\") pod \"task-pv-pod\" (UID: \"ae629cef-d99e-420a-ba98-1b27cdfa70ee\") " pod="default/task-pv-pod"
	Sep 12 18:24:54 addons-436608 kubelet[1203]: I0912 18:24:54.446423    1203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-37e038b2-0f3c-4ff4-a7b1-c17bc01cf822\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^aadc9b56-5199-11ee-b4f4-1ef2bc6871f7\") pod \"task-pv-pod\" (UID: \"ae629cef-d99e-420a-ba98-1b27cdfa70ee\") " pod="default/task-pv-pod"
	Sep 12 18:24:54 addons-436608 kubelet[1203]: I0912 18:24:54.576714    1203 operation_generator.go:661] "MountVolume.MountDevice succeeded for volume \"pvc-37e038b2-0f3c-4ff4-a7b1-c17bc01cf822\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^aadc9b56-5199-11ee-b4f4-1ef2bc6871f7\") pod \"task-pv-pod\" (UID: \"ae629cef-d99e-420a-ba98-1b27cdfa70ee\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/32bc86348d1e4064e92093c8ed529da4c56bf3452326b533d187f61b30850e24/globalmount\"" pod="default/task-pv-pod"
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.156172    1203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45vkr\" (UniqueName: \"kubernetes.io/projected/e99d30e3-1cee-4f98-ae61-60da1507b58c-kube-api-access-45vkr\") pod \"e99d30e3-1cee-4f98-ae61-60da1507b58c\" (UID: \"e99d30e3-1cee-4f98-ae61-60da1507b58c\") "
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.169389    1203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99d30e3-1cee-4f98-ae61-60da1507b58c-kube-api-access-45vkr" (OuterVolumeSpecName: "kube-api-access-45vkr") pod "e99d30e3-1cee-4f98-ae61-60da1507b58c" (UID: "e99d30e3-1cee-4f98-ae61-60da1507b58c"). InnerVolumeSpecName "kube-api-access-45vkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.257066    1203 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx5j5\" (UniqueName: \"kubernetes.io/projected/a373c566-4d02-4ea2-9a53-2086521429fd-kube-api-access-lx5j5\") pod \"a373c566-4d02-4ea2-9a53-2086521429fd\" (UID: \"a373c566-4d02-4ea2-9a53-2086521429fd\") "
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.257231    1203 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-45vkr\" (UniqueName: \"kubernetes.io/projected/e99d30e3-1cee-4f98-ae61-60da1507b58c-kube-api-access-45vkr\") on node \"addons-436608\" DevicePath \"\""
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.268570    1203 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a373c566-4d02-4ea2-9a53-2086521429fd-kube-api-access-lx5j5" (OuterVolumeSpecName: "kube-api-access-lx5j5") pod "a373c566-4d02-4ea2-9a53-2086521429fd" (UID: "a373c566-4d02-4ea2-9a53-2086521429fd"). InnerVolumeSpecName "kube-api-access-lx5j5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.346052    1203 scope.go:117] "RemoveContainer" containerID="e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6"
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.363005    1203 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lx5j5\" (UniqueName: \"kubernetes.io/projected/a373c566-4d02-4ea2-9a53-2086521429fd-kube-api-access-lx5j5\") on node \"addons-436608\" DevicePath \"\""
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.363436    1203 scope.go:117] "RemoveContainer" containerID="e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6"
	Sep 12 18:24:55 addons-436608 kubelet[1203]: E0912 18:24:55.364103    1203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6\": not found" containerID="e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6"
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.364157    1203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6"} err="failed to get container status \"e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4a3b97e8fdaed775fafe1b8be836fdef1c4c11cc295eec7fbf51ff8e5f6f3f6\": not found"
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.364170    1203 scope.go:117] "RemoveContainer" containerID="b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757"
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.376877    1203 scope.go:117] "RemoveContainer" containerID="b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757"
	Sep 12 18:24:55 addons-436608 kubelet[1203]: E0912 18:24:55.377848    1203 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757\": not found" containerID="b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757"
	Sep 12 18:24:55 addons-436608 kubelet[1203]: I0912 18:24:55.377880    1203 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757"} err="failed to get container status \"b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5a9a08a802a721ba761e567ff6f738f974f8e5e90dbfacadb0e4c90e7857757\": not found"
	Sep 12 18:24:56 addons-436608 kubelet[1203]: I0912 18:24:56.009532    1203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a373c566-4d02-4ea2-9a53-2086521429fd" path="/var/lib/kubelet/pods/a373c566-4d02-4ea2-9a53-2086521429fd/volumes"
	Sep 12 18:24:56 addons-436608 kubelet[1203]: I0912 18:24:56.011102    1203 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e99d30e3-1cee-4f98-ae61-60da1507b58c" path="/var/lib/kubelet/pods/e99d30e3-1cee-4f98-ae61-60da1507b58c/volumes"
	
	* 
	* ==> storage-provisioner [1323bb57ed9b5d2e9418027cfb6768a6a0f7fcd7e6842f26695515422f0630bd] <==
	* I0912 18:23:21.952639       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 18:23:21.974621       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 18:23:21.974754       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 18:23:22.002286       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 18:23:22.003589       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-436608_63660a74-eca0-4525-9cd3-fb9bb3ecaf9d!
	I0912 18:23:22.020289       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e3d4ad6e-13d3-4b38-88bf-3bdda08769d0", APIVersion:"v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-436608_63660a74-eca0-4525-9cd3-fb9bb3ecaf9d became leader
	I0912 18:23:22.106688       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-436608_63660a74-eca0-4525-9cd3-fb9bb3ecaf9d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-436608 -n addons-436608
helpers_test.go:261: (dbg) Run:  kubectl --context addons-436608 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod ingress-nginx-admission-create-4c95q ingress-nginx-admission-patch-t697x
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-436608 describe pod nginx task-pv-pod ingress-nginx-admission-create-4c95q ingress-nginx-admission-patch-t697x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-436608 describe pod nginx task-pv-pod ingress-nginx-admission-create-4c95q ingress-nginx-admission-patch-t697x: exit status 1 (82.856374ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-436608/192.168.39.50
	Start Time:       Tue, 12 Sep 2023 18:24:53 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wdrc (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-8wdrc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/nginx to addons-436608
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-436608/192.168.39.50
	Start Time:       Tue, 12 Sep 2023 18:24:54 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k6qgw (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-k6qgw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/task-pv-pod to addons-436608
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4c95q" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t697x" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-436608 describe pod nginx task-pv-pod ingress-nginx-admission-create-4c95q ingress-nginx-admission-patch-t697x: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (7.89s)

Test pass (265/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 53.74
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.28.1/json-events 17.77
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.54
20 TestOffline 102.69
22 TestAddons/Setup 144.64
24 TestAddons/parallel/Registry 16.73
25 TestAddons/parallel/Ingress 22.19
27 TestAddons/parallel/MetricsServer 6
28 TestAddons/parallel/HelmTiller 13.44
30 TestAddons/parallel/CSI 46.62
31 TestAddons/parallel/Headlamp 15.39
32 TestAddons/parallel/CloudSpanner 5.63
35 TestAddons/serial/GCPAuth/Namespaces 0.12
36 TestAddons/StoppedEnableDisable 92.32
37 TestCertOptions 94.32
38 TestCertExpiration 256.2
40 TestForceSystemdFlag 72.78
41 TestForceSystemdEnv 76.64
43 TestKVMDriverInstallOrUpdate 4.15
47 TestErrorSpam/setup 47.74
48 TestErrorSpam/start 0.32
49 TestErrorSpam/status 0.76
50 TestErrorSpam/pause 1.38
51 TestErrorSpam/unpause 1.43
52 TestErrorSpam/stop 1.34
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 62.16
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 29.44
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.47
64 TestFunctional/serial/CacheCmd/cache/add_local 3.48
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.19
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 41.2
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.15
75 TestFunctional/serial/LogsFileCmd 1.22
76 TestFunctional/serial/InvalidService 4.19
78 TestFunctional/parallel/ConfigCmd 0.28
79 TestFunctional/parallel/DashboardCmd 14.43
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.14
82 TestFunctional/parallel/StatusCmd 0.92
86 TestFunctional/parallel/ServiceCmdConnect 11.69
87 TestFunctional/parallel/AddonsCmd 0.11
88 TestFunctional/parallel/PersistentVolumeClaim 37.12
90 TestFunctional/parallel/SSHCmd 0.39
91 TestFunctional/parallel/CpCmd 0.84
92 TestFunctional/parallel/MySQL 31.33
93 TestFunctional/parallel/FileSync 0.19
94 TestFunctional/parallel/CertSync 1.23
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
102 TestFunctional/parallel/License 0.75
104 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
110 TestFunctional/parallel/ImageCommands/ImageBuild 5.46
111 TestFunctional/parallel/ImageCommands/Setup 2.28
119 TestFunctional/parallel/ProfileCmd/profile_list 0.31
120 TestFunctional/parallel/ProfileCmd/profile_json_output 0.24
121 TestFunctional/parallel/ServiceCmd/DeployApp 10.17
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.2
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.31
124 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.26
125 TestFunctional/parallel/ServiceCmd/List 0.24
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.26
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
128 TestFunctional/parallel/ServiceCmd/Format 0.34
129 TestFunctional/parallel/MountCmd/any-port 9.68
130 TestFunctional/parallel/ServiceCmd/URL 0.35
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.21
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.72
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.42
138 TestFunctional/parallel/MountCmd/specific-port 1.79
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
140 TestFunctional/parallel/Version/short 0.04
141 TestFunctional/parallel/Version/components 0.62
142 TestFunctional/delete_addon-resizer_images 0.06
143 TestFunctional/delete_my-image_image 0.01
144 TestFunctional/delete_minikube_cached_images 0.01
148 TestIngressAddonLegacy/StartLegacyK8sCluster 126.85
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.46
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.53
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 35.02
155 TestJSONOutput/start/Command 58.92
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.59
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.54
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 6.37
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.17
183 TestMainNoArgs 0.04
184 TestMinikubeProfile 97.65
187 TestMountStart/serial/StartWithMountFirst 29.99
188 TestMountStart/serial/VerifyMountFirst 0.35
189 TestMountStart/serial/StartWithMountSecond 25.73
190 TestMountStart/serial/VerifyMountSecond 0.35
191 TestMountStart/serial/DeleteFirst 0.85
192 TestMountStart/serial/VerifyMountPostDelete 0.35
193 TestMountStart/serial/Stop 1.1
194 TestMountStart/serial/RestartStopped 23.43
195 TestMountStart/serial/VerifyMountPostStop 0.36
198 TestMultiNode/serial/FreshStart2Nodes 167.46
199 TestMultiNode/serial/DeployApp2Nodes 5.31
200 TestMultiNode/serial/PingHostFrom2Pods 0.8
201 TestMultiNode/serial/AddNode 42.4
202 TestMultiNode/serial/ProfileList 0.21
203 TestMultiNode/serial/CopyFile 7.09
204 TestMultiNode/serial/StopNode 2.14
205 TestMultiNode/serial/StartAfterStop 26.9
206 TestMultiNode/serial/RestartKeepsNodes 309.07
207 TestMultiNode/serial/DeleteNode 1.69
208 TestMultiNode/serial/StopMultiNode 183.01
209 TestMultiNode/serial/RestartMultiNode 95.54
210 TestMultiNode/serial/ValidateNameConflict 50.46
215 TestPreload 317.94
217 TestScheduledStopUnix 118.82
221 TestRunningBinaryUpgrade 216.46
223 TestKubernetesUpgrade 251.07
226 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
227 TestNoKubernetes/serial/StartWithK8s 100.96
228 TestNoKubernetes/serial/StartWithStopK8s 46.47
229 TestStoppedBinaryUpgrade/Setup 2.19
230 TestStoppedBinaryUpgrade/Upgrade 137.35
231 TestNoKubernetes/serial/Start 30.71
232 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
233 TestNoKubernetes/serial/ProfileList 6.57
234 TestNoKubernetes/serial/Stop 1.25
235 TestNoKubernetes/serial/StartNoArgs 53.63
236 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
237 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
252 TestNetworkPlugins/group/false 3.01
257 TestPause/serial/Start 63.16
259 TestStartStop/group/old-k8s-version/serial/FirstStart 156.37
261 TestStartStop/group/no-preload/serial/FirstStart 154.51
262 TestPause/serial/SecondStartNoReconfiguration 18.01
263 TestPause/serial/Pause 0.82
264 TestPause/serial/VerifyStatus 0.27
265 TestPause/serial/Unpause 0.6
266 TestPause/serial/PauseAgain 0.68
267 TestPause/serial/DeletePaused 1.5
268 TestPause/serial/VerifyDeletedResources 0.5
270 TestStartStop/group/embed-certs/serial/FirstStart 74.5
272 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.17
273 TestStartStop/group/old-k8s-version/serial/DeployApp 10.51
274 TestStartStop/group/embed-certs/serial/DeployApp 10.42
275 TestStartStop/group/no-preload/serial/DeployApp 10.48
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.97
277 TestStartStop/group/old-k8s-version/serial/Stop 92.44
278 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
279 TestStartStop/group/embed-certs/serial/Stop 92.08
280 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 4.18
281 TestStartStop/group/no-preload/serial/Stop 91.52
282 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.36
283 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
284 TestStartStop/group/default-k8s-diff-port/serial/Stop 101.56
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
286 TestStartStop/group/old-k8s-version/serial/SecondStart 457.61
287 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
288 TestStartStop/group/embed-certs/serial/SecondStart 377.73
289 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
290 TestStartStop/group/no-preload/serial/SecondStart 358.05
291 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
292 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 330.2
293 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 22.03
294 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.02
295 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.03
296 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
297 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
299 TestStartStop/group/no-preload/serial/Pause 2.73
300 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
302 TestStartStop/group/newest-cni/serial/FirstStart 61.27
303 TestStartStop/group/embed-certs/serial/Pause 3.01
304 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
305 TestNetworkPlugins/group/auto/Start 87.25
306 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
307 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.48
308 TestNetworkPlugins/group/kindnet/Start 113.22
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.15
310 TestStartStop/group/newest-cni/serial/DeployApp 0
311 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.46
312 TestStartStop/group/newest-cni/serial/Stop 7.2
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
315 TestStartStop/group/old-k8s-version/serial/Pause 2.47
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/newest-cni/serial/SecondStart 53.39
318 TestNetworkPlugins/group/calico/Start 119.94
319 TestNetworkPlugins/group/auto/KubeletFlags 0.21
320 TestNetworkPlugins/group/auto/NetCatPod 9.41
321 TestNetworkPlugins/group/auto/DNS 0.16
322 TestNetworkPlugins/group/auto/Localhost 0.14
323 TestNetworkPlugins/group/auto/HairPin 0.17
324 TestNetworkPlugins/group/custom-flannel/Start 92.04
325 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
326 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
327 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
328 TestStartStop/group/newest-cni/serial/Pause 2.6
329 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
330 TestNetworkPlugins/group/enable-default-cni/Start 97.6
331 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
332 TestNetworkPlugins/group/kindnet/NetCatPod 13.44
333 TestNetworkPlugins/group/kindnet/DNS 0.17
334 TestNetworkPlugins/group/kindnet/Localhost 0.15
335 TestNetworkPlugins/group/kindnet/HairPin 0.14
336 TestNetworkPlugins/group/flannel/Start 99.65
337 TestNetworkPlugins/group/calico/ControllerPod 5.02
338 TestNetworkPlugins/group/calico/KubeletFlags 0.21
339 TestNetworkPlugins/group/calico/NetCatPod 12.57
340 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.48
341 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.99
342 TestNetworkPlugins/group/calico/DNS 0.21
343 TestNetworkPlugins/group/calico/Localhost 0.21
344 TestNetworkPlugins/group/calico/HairPin 0.16
345 TestNetworkPlugins/group/custom-flannel/DNS 0.21
346 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
347 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
348 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
349 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.54
350 TestNetworkPlugins/group/bridge/Start 63.99
351 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
354 TestNetworkPlugins/group/flannel/ControllerPod 5.03
355 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
356 TestNetworkPlugins/group/flannel/NetCatPod 10.35
357 TestNetworkPlugins/group/flannel/DNS 0.19
358 TestNetworkPlugins/group/flannel/Localhost 0.15
359 TestNetworkPlugins/group/flannel/HairPin 0.15
360 TestNetworkPlugins/group/bridge/KubeletFlags 0.67
361 TestNetworkPlugins/group/bridge/NetCatPod 10.34
362 TestNetworkPlugins/group/bridge/DNS 0.15
363 TestNetworkPlugins/group/bridge/Localhost 0.13
364 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.16.0/json-events (53.74s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-647738 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-647738 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (53.735643612s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (53.74s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-647738
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-647738: exit status 85 (53.984615ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-647738 | jenkins | v1.31.2 | 12 Sep 23 18:21 UTC |          |
	|         | -p download-only-647738        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 18:21:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 18:21:00.545455   10968 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:21:00.545554   10968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:21:00.545565   10968 out.go:309] Setting ErrFile to fd 2...
	I0912 18:21:00.545571   10968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:21:00.545796   10968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
	W0912 18:21:00.545943   10968 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17233-3774/.minikube/config/config.json: open /home/jenkins/minikube-integration/17233-3774/.minikube/config/config.json: no such file or directory
	I0912 18:21:00.548329   10968 out.go:303] Setting JSON to true
	I0912 18:21:00.549202   10968 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":207,"bootTime":1694542654,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:21:00.549260   10968 start.go:138] virtualization: kvm guest
	I0912 18:21:00.551550   10968 out.go:97] [download-only-647738] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:21:00.553080   10968 out.go:169] MINIKUBE_LOCATION=17233
	I0912 18:21:00.551707   10968 notify.go:220] Checking for updates...
	W0912 18:21:00.551717   10968 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17233-3774/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 18:21:00.555970   10968 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:21:00.557467   10968 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	I0912 18:21:00.558903   10968 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	I0912 18:21:00.560386   10968 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 18:21:00.562914   10968 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 18:21:00.563144   10968 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:21:00.682592   10968 out.go:97] Using the kvm2 driver based on user configuration
	I0912 18:21:00.682623   10968 start.go:298] selected driver: kvm2
	I0912 18:21:00.682629   10968 start.go:902] validating driver "kvm2" against <nil>
	I0912 18:21:00.682917   10968 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:21:00.683029   10968 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17233-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 18:21:00.698202   10968 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0912 18:21:00.698270   10968 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 18:21:00.698695   10968 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0912 18:21:00.698875   10968 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 18:21:00.698910   10968 cni.go:84] Creating CNI manager for ""
	I0912 18:21:00.698922   10968 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 18:21:00.698930   10968 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 18:21:00.698938   10968 start_flags.go:321] config:
	{Name:download-only-647738 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-647738 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:21:00.699129   10968 iso.go:125] acquiring lock: {Name:mk65f8eb47eabe5b76d2b188fdc61eff869c985e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:21:00.701006   10968 out.go:97] Downloading VM boot image ...
	I0912 18:21:00.701040   10968 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17233-3774/.minikube/cache/iso/amd64/minikube-v1.31.0-1694081706-17207-amd64.iso
	I0912 18:21:09.472800   10968 out.go:97] Starting control plane node download-only-647738 in cluster download-only-647738
	I0912 18:21:09.472823   10968 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0912 18:21:09.791860   10968 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0912 18:21:09.791883   10968 cache.go:57] Caching tarball of preloaded images
	I0912 18:21:09.792065   10968 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0912 18:21:09.794178   10968 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0912 18:21:09.794210   10968 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0912 18:21:09.899629   10968 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17233-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0912 18:21:23.916209   10968 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0912 18:21:23.916304   10968 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17233-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0912 18:21:24.928251   10968 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0912 18:21:24.928607   10968 profile.go:148] Saving config to /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/download-only-647738/config.json ...
	I0912 18:21:24.928637   10968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/download-only-647738/config.json: {Name:mk3eee399e7c5200e0cbdb6451b4ccb6344ece3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 18:21:24.928791   10968 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0912 18:21:24.928948   10968 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17233-3774/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-647738"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

TestDownloadOnly/v1.28.1/json-events (17.77s)

=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-647738 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-647738 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (17.774386127s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (17.77s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-647738
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-647738: exit status 85 (52.233004ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-647738 | jenkins | v1.31.2 | 12 Sep 23 18:21 UTC |          |
	|         | -p download-only-647738        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-647738 | jenkins | v1.31.2 | 12 Sep 23 18:21 UTC |          |
	|         | -p download-only-647738        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 18:21:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 18:21:54.337119   11573 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:21:54.337370   11573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:21:54.337380   11573 out.go:309] Setting ErrFile to fd 2...
	I0912 18:21:54.337387   11573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:21:54.337584   11573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
	W0912 18:21:54.337728   11573 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17233-3774/.minikube/config/config.json: open /home/jenkins/minikube-integration/17233-3774/.minikube/config/config.json: no such file or directory
	I0912 18:21:54.338133   11573 out.go:303] Setting JSON to true
	I0912 18:21:54.338923   11573 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":260,"bootTime":1694542654,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:21:54.338978   11573 start.go:138] virtualization: kvm guest
	I0912 18:21:54.341276   11573 out.go:97] [download-only-647738] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:21:54.343066   11573 out.go:169] MINIKUBE_LOCATION=17233
	I0912 18:21:54.341450   11573 notify.go:220] Checking for updates...
	I0912 18:21:54.346124   11573 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:21:54.347659   11573 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	I0912 18:21:54.349106   11573 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	I0912 18:21:54.350456   11573 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 18:21:54.353005   11573 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 18:21:54.353447   11573 config.go:182] Loaded profile config "download-only-647738": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0912 18:21:54.353490   11573 start.go:810] api.Load failed for download-only-647738: filestore "download-only-647738": Docker machine "download-only-647738" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0912 18:21:54.353566   11573 driver.go:373] Setting default libvirt URI to qemu:///system
	W0912 18:21:54.353597   11573 start.go:810] api.Load failed for download-only-647738: filestore "download-only-647738": Docker machine "download-only-647738" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0912 18:21:54.384784   11573 out.go:97] Using the kvm2 driver based on existing profile
	I0912 18:21:54.384805   11573 start.go:298] selected driver: kvm2
	I0912 18:21:54.384810   11573 start.go:902] validating driver "kvm2" against &{Name:download-only-647738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-647738 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:21:54.385165   11573 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:21:54.385254   11573 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17233-3774/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0912 18:21:54.399693   11573 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0912 18:21:54.400331   11573 cni.go:84] Creating CNI manager for ""
	I0912 18:21:54.400348   11573 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0912 18:21:54.400357   11573 start_flags.go:321] config:
	{Name:download-only-647738 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-647738 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:21:54.400521   11573 iso.go:125] acquiring lock: {Name:mk65f8eb47eabe5b76d2b188fdc61eff869c985e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 18:21:54.402229   11573 out.go:97] Starting control plane node download-only-647738 in cluster download-only-647738
	I0912 18:21:54.402242   11573 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0912 18:21:54.727403   11573 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4
	I0912 18:21:54.727437   11573 cache.go:57] Caching tarball of preloaded images
	I0912 18:21:54.727584   11573 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime containerd
	I0912 18:21:54.729511   11573 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0912 18:21:54.729530   11573 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4 ...
	I0912 18:21:55.206382   11573 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:923ead224190762fa2aa551036672b63 -> /home/jenkins/minikube-integration/17233-3774/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-647738"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.05s)

TestDownloadOnly/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-647738
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-547063 --alsologtostderr --binary-mirror http://127.0.0.1:45659 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-547063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-547063
--- PASS: TestBinaryMirror (0.54s)

TestOffline (102.69s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-759206 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-759206 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m41.511863481s)
helpers_test.go:175: Cleaning up "offline-containerd-759206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-759206
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-759206: (1.179565918s)
--- PASS: TestOffline (102.69s)

TestAddons/Setup (144.64s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-436608 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-436608 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m24.644681426s)
--- PASS: TestAddons/Setup (144.64s)

TestAddons/parallel/Registry (16.73s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 19.28443ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-bwndt" [e99d30e3-1cee-4f98-ae61-60da1507b58c] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.051645596s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vqw5g" [a373c566-4d02-4ea2-9a53-2086521429fd] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013979444s
addons_test.go:316: (dbg) Run:  kubectl --context addons-436608 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-436608 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-436608 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.734742174s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 ip
2023/09/12 18:24:53 [DEBUG] GET http://192.168.39.50:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.73s)

TestAddons/parallel/Ingress (22.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-436608 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-436608 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-436608 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [620956ed-8225-423b-bcaf-ba968f8e638a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [620956ed-8225-423b-bcaf-ba968f8e638a] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.011836056s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-436608 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.50
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-436608 addons disable ingress-dns --alsologtostderr -v=1: (1.942881849s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-436608 addons disable ingress --alsologtostderr -v=1: (7.759857967s)
--- PASS: TestAddons/parallel/Ingress (22.19s)

TestAddons/parallel/MetricsServer (6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 19.248055ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-jdzwh" [e53337c4-8bb1-400f-b129-a93c921f4de5] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.051313966s
addons_test.go:391: (dbg) Run:  kubectl --context addons-436608 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.00s)

TestAddons/parallel/HelmTiller (13.44s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 19.183028ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-wfwv2" [d8bd764e-e992-4694-950e-4d96b971fdc4] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.051104823s
addons_test.go:449: (dbg) Run:  kubectl --context addons-436608 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-436608 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.72796889s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.44s)

TestAddons/parallel/CSI (46.62s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 6.44976ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-436608 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-436608 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ae629cef-d99e-420a-ba98-1b27cdfa70ee] Pending
helpers_test.go:344: "task-pv-pod" [ae629cef-d99e-420a-ba98-1b27cdfa70ee] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ae629cef-d99e-420a-ba98-1b27cdfa70ee] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.038664973s
addons_test.go:560: (dbg) Run:  kubectl --context addons-436608 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-436608 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-436608 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-436608 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-436608 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-436608 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-436608 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-436608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-436608 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7e73bc5e-52d3-4ade-91c3-67bb9d3be55a] Pending
helpers_test.go:344: "task-pv-pod-restore" [7e73bc5e-52d3-4ade-91c3-67bb9d3be55a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7e73bc5e-52d3-4ade-91c3-67bb9d3be55a] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.013052478s
addons_test.go:602: (dbg) Run:  kubectl --context addons-436608 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-436608 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-436608 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-436608 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.749218276s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-436608 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.62s)

TestAddons/parallel/Headlamp (15.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-436608 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-436608 --alsologtostderr -v=1: (1.364010731s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-2x8cb" [3f89efd7-d1ff-4bba-8859-5e36d0c4faf6] Pending
helpers_test.go:344: "headlamp-699c48fb74-2x8cb" [3f89efd7-d1ff-4bba-8859-5e36d0c4faf6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-2x8cb" [3f89efd7-d1ff-4bba-8859-5e36d0c4faf6] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.020089471s
--- PASS: TestAddons/parallel/Headlamp (15.39s)

TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-628kt" [7bdaae8f-7e7f-487b-92cc-529ccb3dd4de] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012476437s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-436608
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-436608 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-436608 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (92.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-436608
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-436608: (1m32.073722185s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-436608
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-436608
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-436608
--- PASS: TestAddons/StoppedEnableDisable (92.32s)

TestCertOptions (94.32s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-891635 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-891635 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m32.836364181s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-891635 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-891635 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-891635 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-891635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-891635
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-891635: (1.016644762s)
--- PASS: TestCertOptions (94.32s)

TestCertExpiration (256.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-395683 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-395683 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m9.947404849s)
E0912 19:05:54.537230   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-395683 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-395683 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (5.243572635s)
helpers_test.go:175: Cleaning up "cert-expiration-395683" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-395683
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-395683: (1.003570223s)
--- PASS: TestCertExpiration (256.20s)

TestForceSystemdFlag (72.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-329088 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-329088 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m11.560671532s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-329088 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-329088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-329088
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-329088: (1.020463818s)
--- PASS: TestForceSystemdFlag (72.78s)

TestForceSystemdEnv (76.64s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-863086 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-863086 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m15.484116985s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-863086 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-863086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-863086
--- PASS: TestForceSystemdEnv (76.64s)

TestKVMDriverInstallOrUpdate (4.15s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.15s)

TestErrorSpam/setup (47.74s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-522788 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-522788 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-522788 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-522788 --driver=kvm2  --container-runtime=containerd: (47.740874765s)
--- PASS: TestErrorSpam/setup (47.74s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 status
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.38s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 pause
--- PASS: TestErrorSpam/pause (1.38s)

TestErrorSpam/unpause (1.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

TestErrorSpam/stop (1.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 stop: (1.211996961s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-522788 --log_dir /tmp/nospam-522788 stop
--- PASS: TestErrorSpam/stop (1.34s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17233-3774/.minikube/files/etc/test/nested/copy/10956/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.16s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233107 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-233107 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m2.155057177s)
--- PASS: TestFunctional/serial/StartWithProxy (62.16s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.44s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233107 --alsologtostderr -v=8
E0912 18:29:37.673021   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:37.678943   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:37.689217   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:37.709491   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:37.749817   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:37.830132   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:37.990592   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:38.311300   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:38.951674   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:40.231998   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:42.792721   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:29:47.913734   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-233107 --alsologtostderr -v=8: (29.442654965s)
functional_test.go:659: soft start took 29.443322003s for "functional-233107" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.44s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-233107 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 cache add registry.k8s.io/pause:3.1: (1.472931279s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 cache add registry.k8s.io/pause:3.3
E0912 18:29:58.154059   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 cache add registry.k8s.io/pause:3.3: (1.530252905s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 cache add registry.k8s.io/pause:latest: (1.464572837s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.47s)

TestFunctional/serial/CacheCmd/cache/add_local (3.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-233107 /tmp/TestFunctionalserialCacheCmdcacheadd_local2655930248/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 cache add minikube-local-cache-test:functional-233107
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 cache add minikube-local-cache-test:functional-233107: (3.172919021s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 cache delete minikube-local-cache-test:functional-233107
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-233107
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.48s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (201.8551ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 cache reload: (1.555637479s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 kubectl -- --context functional-233107 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-233107 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.2s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233107 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0912 18:30:18.635165   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-233107 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.20321167s)
functional_test.go:757: restart took 41.203305611s for "functional-233107" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.20s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-233107 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.15s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 logs: (1.147230347s)
--- PASS: TestFunctional/serial/LogsCmd (1.15s)

TestFunctional/serial/LogsFileCmd (1.22s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 logs --file /tmp/TestFunctionalserialLogsFileCmd3204318922/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 logs --file /tmp/TestFunctionalserialLogsFileCmd3204318922/001/logs.txt: (1.21855754s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.22s)

TestFunctional/serial/InvalidService (4.19s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-233107 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-233107
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-233107: exit status 115 (282.556696ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.111:32260 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-233107 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.19s)

TestFunctional/parallel/ConfigCmd (0.28s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 config get cpus: exit status 14 (47.511494ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 config get cpus: exit status 14 (40.5688ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)

TestFunctional/parallel/DashboardCmd (14.43s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233107 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-233107 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 17432: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.43s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233107 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-233107 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (149.65072ms)

-- stdout --
	* [functional-233107] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0912 18:31:08.201847   17130 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:31:08.201958   17130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:31:08.201962   17130 out.go:309] Setting ErrFile to fd 2...
	I0912 18:31:08.201967   17130 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:31:08.202146   17130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
	I0912 18:31:08.202661   17130 out.go:303] Setting JSON to false
	I0912 18:31:08.203720   17130 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":814,"bootTime":1694542654,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:31:08.203781   17130 start.go:138] virtualization: kvm guest
	I0912 18:31:08.206360   17130 out.go:177] * [functional-233107] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 18:31:08.207782   17130 out.go:177]   - MINIKUBE_LOCATION=17233
	I0912 18:31:08.207789   17130 notify.go:220] Checking for updates...
	I0912 18:31:08.209195   17130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:31:08.210626   17130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	I0912 18:31:08.212126   17130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	I0912 18:31:08.216774   17130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 18:31:08.218283   17130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 18:31:08.220238   17130 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 18:31:08.220847   17130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:31:08.220939   17130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:31:08.237207   17130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0912 18:31:08.237687   17130 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:31:08.238201   17130 main.go:141] libmachine: Using API Version  1
	I0912 18:31:08.238218   17130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:31:08.238497   17130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:31:08.238671   17130 main.go:141] libmachine: (functional-233107) Calling .DriverName
	I0912 18:31:08.238908   17130 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:31:08.239179   17130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:31:08.239224   17130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:31:08.257715   17130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34891
	I0912 18:31:08.258125   17130 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:31:08.258569   17130 main.go:141] libmachine: Using API Version  1
	I0912 18:31:08.258592   17130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:31:08.258900   17130 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:31:08.259131   17130 main.go:141] libmachine: (functional-233107) Calling .DriverName
	I0912 18:31:08.294755   17130 out.go:177] * Using the kvm2 driver based on existing profile
	I0912 18:31:08.296177   17130 start.go:298] selected driver: kvm2
	I0912 18:31:08.296196   17130 start.go:902] validating driver "kvm2" against &{Name:functional-233107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-233107 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.111 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:31:08.296349   17130 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 18:31:08.299035   17130 out.go:177] 
	W0912 18:31:08.300554   17130 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 18:31:08.302060   17130 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233107 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.27s)
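Every finished test in this report ends with a result line in Go's fixed `--- PASS: <name> (<seconds>s)` format. The sketch below is a small, hypothetical helper (not part of minikube or its test suite) for tallying those lines from a saved copy of a report like this one:

```python
import re

# Go's test runner emits one result line per finished (sub)test, e.g.
#   --- PASS: TestFunctional/parallel/DryRun (0.27s)
# Capture the status, the test name, and the duration in seconds.
RESULT_RE = re.compile(r"^--- (PASS|FAIL|SKIP): (\S+) \((\d+(?:\.\d+)?)s\)$")

def parse_results(lines):
    """Return a (status, test_name, seconds) tuple for every result line found."""
    results = []
    for line in lines:
        m = RESULT_RE.match(line.strip())
        if m:
            results.append((m.group(1), m.group(2), float(m.group(3))))
    return results

sample = [
    "--- PASS: TestErrorSpam/start (0.32s)",
    "--- FAIL: TestAddons/parallel/InspektorGadget (7.89s)",
]
print(parse_results(sample))
```

Feeding the whole report through `parse_results` gives the same per-test durations shown in the summary table at the top.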

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-233107 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-233107 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (141.184609ms)

-- stdout --
	* [functional-233107] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0912 18:31:06.588040   16773 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:31:06.588158   16773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:31:06.588168   16773 out.go:309] Setting ErrFile to fd 2...
	I0912 18:31:06.588177   16773 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:31:06.588526   16773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
	I0912 18:31:06.589090   16773 out.go:303] Setting JSON to false
	I0912 18:31:06.591731   16773 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":813,"bootTime":1694542654,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 18:31:06.591815   16773 start.go:138] virtualization: kvm guest
	I0912 18:31:06.594015   16773 out.go:177] * [functional-233107] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0912 18:31:06.595590   16773 out.go:177]   - MINIKUBE_LOCATION=17233
	I0912 18:31:06.595575   16773 notify.go:220] Checking for updates...
	I0912 18:31:06.597057   16773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 18:31:06.598465   16773 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	I0912 18:31:06.599952   16773 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	I0912 18:31:06.601362   16773 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 18:31:06.602871   16773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 18:31:06.604818   16773 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 18:31:06.605285   16773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:31:06.605337   16773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:31:06.621116   16773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I0912 18:31:06.621720   16773 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:31:06.622548   16773 main.go:141] libmachine: Using API Version  1
	I0912 18:31:06.622574   16773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:31:06.622876   16773 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:31:06.623039   16773 main.go:141] libmachine: (functional-233107) Calling .DriverName
	I0912 18:31:06.623295   16773 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 18:31:06.623707   16773 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:31:06.623751   16773 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:31:06.638782   16773 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45491
	I0912 18:31:06.639192   16773 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:31:06.639703   16773 main.go:141] libmachine: Using API Version  1
	I0912 18:31:06.639737   16773 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:31:06.640048   16773 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:31:06.640231   16773 main.go:141] libmachine: (functional-233107) Calling .DriverName
	I0912 18:31:06.675266   16773 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0912 18:31:06.676720   16773 start.go:298] selected driver: kvm2
	I0912 18:31:06.676737   16773 start.go:902] validating driver "kvm2" against &{Name:functional-233107 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17207/minikube-v1.31.0-1694081706-17207-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-233107 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.111 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 18:31:06.676868   16773 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 18:31:06.679508   16773 out.go:177] 
	W0912 18:31:06.680908   16773 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 18:31:06.682192   16773 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.92s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)

TestFunctional/parallel/ServiceCmdConnect (11.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-233107 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-233107 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-swtkd" [dda06dd8-1901-4a75-aa3d-ebf50ce22047] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-swtkd" [dda06dd8-1901-4a75-aa3d-ebf50ce22047] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.011657104s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.111:31657
functional_test.go:1674: http://192.168.50.111:31657: success! body:

Hostname: hello-node-connect-55497b8b78-swtkd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.111:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.111:31657
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.69s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (37.12s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5a8887d4-57e5-4398-8677-8392e22d5b3e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.022355851s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-233107 get storageclass -o=json
E0912 18:30:59.595343   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-233107 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-233107 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-233107 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8ef83688-8511-4d43-89b9-c59e78fbe41a] Pending
helpers_test.go:344: "sp-pod" [8ef83688-8511-4d43-89b9-c59e78fbe41a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8ef83688-8511-4d43-89b9-c59e78fbe41a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.020397177s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-233107 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-233107 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-233107 delete -f testdata/storage-provisioner/pod.yaml: (1.084970157s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-233107 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2e15a8ff-9e5a-478e-9ebd-a4820bac545f] Pending
helpers_test.go:344: "sp-pod" [2e15a8ff-9e5a-478e-9ebd-a4820bac545f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2e15a8ff-9e5a-478e-9ebd-a4820bac545f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.025905621s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-233107 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.12s)

TestFunctional/parallel/SSHCmd (0.39s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

TestFunctional/parallel/CpCmd (0.84s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh -n functional-233107 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 cp functional-233107:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd662648326/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh -n functional-233107 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.84s)

TestFunctional/parallel/MySQL (31.33s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-233107 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-fhcxb" [a50b8548-9121-4247-978b-97088851af24] Pending
helpers_test.go:344: "mysql-859648c796-fhcxb" [a50b8548-9121-4247-978b-97088851af24] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-fhcxb" [a50b8548-9121-4247-978b-97088851af24] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.029953188s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-233107 exec mysql-859648c796-fhcxb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-233107 exec mysql-859648c796-fhcxb -- mysql -ppassword -e "show databases;": exit status 1 (152.529762ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-233107 exec mysql-859648c796-fhcxb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-233107 exec mysql-859648c796-fhcxb -- mysql -ppassword -e "show databases;": exit status 1 (192.64649ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-233107 exec mysql-859648c796-fhcxb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-233107 exec mysql-859648c796-fhcxb -- mysql -ppassword -e "show databases;": exit status 1 (180.876119ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-233107 exec mysql-859648c796-fhcxb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.33s)

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10956/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo cat /etc/test/nested/copy/10956/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.23s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10956.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo cat /etc/ssl/certs/10956.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10956.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo cat /usr/share/ca-certificates/10956.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/109562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo cat /etc/ssl/certs/109562.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/109562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo cat /usr/share/ca-certificates/109562.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.23s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-233107 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 ssh "sudo systemctl is-active docker": exit status 1 (186.005387ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 ssh "sudo systemctl is-active crio": exit status 1 (187.372731ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

TestFunctional/parallel/License (0.75s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.75s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-233107 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-233107
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-233107
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233107 image ls --format short --alsologtostderr:
I0912 18:31:22.534557   18507 out.go:296] Setting OutFile to fd 1 ...
I0912 18:31:22.534772   18507 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:22.534780   18507 out.go:309] Setting ErrFile to fd 2...
I0912 18:31:22.534785   18507 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:22.534962   18507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
I0912 18:31:22.535497   18507 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:22.535591   18507 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:22.535931   18507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:22.535972   18507 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:22.550146   18507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36217
I0912 18:31:22.550554   18507 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:22.551040   18507 main.go:141] libmachine: Using API Version  1
I0912 18:31:22.551064   18507 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:22.551402   18507 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:22.551603   18507 main.go:141] libmachine: (functional-233107) Calling .GetState
I0912 18:31:22.553394   18507 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:22.553429   18507 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:22.567174   18507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34847
I0912 18:31:22.567538   18507 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:22.567954   18507 main.go:141] libmachine: Using API Version  1
I0912 18:31:22.568001   18507 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:22.568340   18507 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:22.568534   18507 main.go:141] libmachine: (functional-233107) Calling .DriverName
I0912 18:31:22.568721   18507 ssh_runner.go:195] Run: systemctl --version
I0912 18:31:22.568744   18507 main.go:141] libmachine: (functional-233107) Calling .GetSSHHostname
I0912 18:31:22.571351   18507 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:22.571707   18507 main.go:141] libmachine: (functional-233107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:d7:0d", ip: ""} in network mk-functional-233107: {Iface:virbr1 ExpiryTime:2023-09-12 19:28:39 +0000 UTC Type:0 Mac:52:54:00:1e:d7:0d Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:functional-233107 Clientid:01:52:54:00:1e:d7:0d}
I0912 18:31:22.571742   18507 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined IP address 192.168.50.111 and MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:22.571820   18507 main.go:141] libmachine: (functional-233107) Calling .GetSSHPort
I0912 18:31:22.571971   18507 main.go:141] libmachine: (functional-233107) Calling .GetSSHKeyPath
I0912 18:31:22.572099   18507 main.go:141] libmachine: (functional-233107) Calling .GetSSHUsername
I0912 18:31:22.572210   18507 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/functional-233107/id_rsa Username:docker}
I0912 18:31:22.661987   18507 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 18:31:22.692009   18507 main.go:141] libmachine: Making call to close driver server
I0912 18:31:22.692024   18507 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:22.692273   18507 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:22.692295   18507 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:31:22.692304   18507 main.go:141] libmachine: Making call to close driver server
I0912 18:31:22.692313   18507 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:22.692319   18507 main.go:141] libmachine: (functional-233107) DBG | Closing plugin on server side
I0912 18:31:22.692577   18507 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:22.692598   18507 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:31:22.692598   18507 main.go:141] libmachine: (functional-233107) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-233107 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.1            | sha256:6cdbab | 24.6MB |
| docker.io/library/minikube-local-cache-test | functional-233107  | sha256:415f02 | 1.01kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| docker.io/library/nginx                     | latest             | sha256:f5a6b2 | 70.5MB |
| gcr.io/google-containers/addon-resizer      | functional-233107  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/kube-scheduler              | v1.28.1            | sha256:b462ce | 18.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-apiserver              | v1.28.1            | sha256:5c8012 | 34.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.1            | sha256:821b3d | 33.4MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b0b1fa | 27.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233107 image ls --format table --alsologtostderr:
I0912 18:31:23.163181   18578 out.go:296] Setting OutFile to fd 1 ...
I0912 18:31:23.163291   18578 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:23.163300   18578 out.go:309] Setting ErrFile to fd 2...
I0912 18:31:23.163305   18578 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:23.163487   18578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
I0912 18:31:23.164034   18578 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:23.164131   18578 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:23.164475   18578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:23.164519   18578 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:23.178607   18578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
I0912 18:31:23.179069   18578 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:23.179645   18578 main.go:141] libmachine: Using API Version  1
I0912 18:31:23.179673   18578 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:23.180007   18578 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:23.180213   18578 main.go:141] libmachine: (functional-233107) Calling .GetState
I0912 18:31:23.182849   18578 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:23.182896   18578 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:23.197739   18578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
I0912 18:31:23.198165   18578 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:23.198732   18578 main.go:141] libmachine: Using API Version  1
I0912 18:31:23.198759   18578 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:23.199175   18578 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:23.199347   18578 main.go:141] libmachine: (functional-233107) Calling .DriverName
I0912 18:31:23.199556   18578 ssh_runner.go:195] Run: systemctl --version
I0912 18:31:23.199584   18578 main.go:141] libmachine: (functional-233107) Calling .GetSSHHostname
I0912 18:31:23.202260   18578 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:23.202651   18578 main.go:141] libmachine: (functional-233107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:d7:0d", ip: ""} in network mk-functional-233107: {Iface:virbr1 ExpiryTime:2023-09-12 19:28:39 +0000 UTC Type:0 Mac:52:54:00:1e:d7:0d Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:functional-233107 Clientid:01:52:54:00:1e:d7:0d}
I0912 18:31:23.202696   18578 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined IP address 192.168.50.111 and MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:23.202805   18578 main.go:141] libmachine: (functional-233107) Calling .GetSSHPort
I0912 18:31:23.203005   18578 main.go:141] libmachine: (functional-233107) Calling .GetSSHKeyPath
I0912 18:31:23.203146   18578 main.go:141] libmachine: (functional-233107) Calling .GetSSHUsername
I0912 18:31:23.203298   18578 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/functional-233107/id_rsa Username:docker}
I0912 18:31:23.298274   18578 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 18:31:23.331741   18578 main.go:141] libmachine: Making call to close driver server
I0912 18:31:23.331758   18578 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:23.332032   18578 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:23.332077   18578 main.go:141] libmachine: (functional-233107) DBG | Closing plugin on server side
I0912 18:31:23.332090   18578 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:31:23.332108   18578 main.go:141] libmachine: Making call to close driver server
I0912 18:31:23.332123   18578 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:23.332328   18578 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:23.332343   18578 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-233107 image ls --format json --alsologtostderr:
[{"id":"sha256:415f0239d9cc2a94a9dc890755cd64e99d0a16e57a47bf523ee0f9b56ca7d57a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-233107"],"size":"1006"},{"id":"sha256:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":["registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"34617463"},{"id":"sha256:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"18802390"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"27731571"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-233107"],"size":"10823156"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"33396106"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":["docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153"],"repoTags":["docker.io/library/nginx:latest"],"size":"70479554"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":["registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"24555014"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233107 image ls --format json --alsologtostderr:
I0912 18:31:22.938589   18554 out.go:296] Setting OutFile to fd 1 ...
I0912 18:31:22.938817   18554 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:22.938825   18554 out.go:309] Setting ErrFile to fd 2...
I0912 18:31:22.938830   18554 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:22.938985   18554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
I0912 18:31:22.939521   18554 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:22.939614   18554 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:22.939972   18554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:22.940013   18554 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:22.954027   18554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41619
I0912 18:31:22.954447   18554 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:22.955021   18554 main.go:141] libmachine: Using API Version  1
I0912 18:31:22.955042   18554 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:22.955415   18554 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:22.955620   18554 main.go:141] libmachine: (functional-233107) Calling .GetState
I0912 18:31:22.957519   18554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:22.957572   18554 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:22.971533   18554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44101
I0912 18:31:22.971923   18554 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:22.972440   18554 main.go:141] libmachine: Using API Version  1
I0912 18:31:22.972469   18554 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:22.972857   18554 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:22.973036   18554 main.go:141] libmachine: (functional-233107) Calling .DriverName
I0912 18:31:22.973229   18554 ssh_runner.go:195] Run: systemctl --version
I0912 18:31:22.973251   18554 main.go:141] libmachine: (functional-233107) Calling .GetSSHHostname
I0912 18:31:22.975776   18554 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:22.976144   18554 main.go:141] libmachine: (functional-233107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:d7:0d", ip: ""} in network mk-functional-233107: {Iface:virbr1 ExpiryTime:2023-09-12 19:28:39 +0000 UTC Type:0 Mac:52:54:00:1e:d7:0d Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:functional-233107 Clientid:01:52:54:00:1e:d7:0d}
I0912 18:31:22.976181   18554 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined IP address 192.168.50.111 and MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:22.976244   18554 main.go:141] libmachine: (functional-233107) Calling .GetSSHPort
I0912 18:31:22.976426   18554 main.go:141] libmachine: (functional-233107) Calling .GetSSHKeyPath
I0912 18:31:22.976547   18554 main.go:141] libmachine: (functional-233107) Calling .GetSSHUsername
I0912 18:31:22.976682   18554 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/functional-233107/id_rsa Username:docker}
I0912 18:31:23.071343   18554 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 18:31:23.116229   18554 main.go:141] libmachine: Making call to close driver server
I0912 18:31:23.116243   18554 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:23.116564   18554 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:23.116586   18554 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:31:23.116621   18554 main.go:141] libmachine: (functional-233107) DBG | Closing plugin on server side
I0912 18:31:23.116677   18554 main.go:141] libmachine: Making call to close driver server
I0912 18:31:23.116693   18554 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:23.116932   18554 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:23.116946   18554 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:31:23.117010   18554 main.go:141] libmachine: (functional-233107) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-233107 image ls --format yaml --alsologtostderr:
- id: sha256:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "34617463"
- id: sha256:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "24555014"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests:
- docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153
repoTags:
- docker.io/library/nginx:latest
size: "70479554"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:415f0239d9cc2a94a9dc890755cd64e99d0a16e57a47bf523ee0f9b56ca7d57a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-233107
size: "1006"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "18802390"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "27731571"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-233107
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "33396106"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233107 image ls --format yaml --alsologtostderr:
I0912 18:31:22.739592   18531 out.go:296] Setting OutFile to fd 1 ...
I0912 18:31:22.739868   18531 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:22.739882   18531 out.go:309] Setting ErrFile to fd 2...
I0912 18:31:22.739899   18531 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:22.740084   18531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
I0912 18:31:22.740788   18531 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:22.740899   18531 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:22.741278   18531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:22.741340   18531 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:22.755377   18531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33531
I0912 18:31:22.755804   18531 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:22.756301   18531 main.go:141] libmachine: Using API Version  1
I0912 18:31:22.756322   18531 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:22.756665   18531 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:22.756842   18531 main.go:141] libmachine: (functional-233107) Calling .GetState
I0912 18:31:22.758759   18531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:22.758802   18531 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:22.772428   18531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
I0912 18:31:22.772895   18531 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:22.773350   18531 main.go:141] libmachine: Using API Version  1
I0912 18:31:22.773373   18531 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:22.773669   18531 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:22.773833   18531 main.go:141] libmachine: (functional-233107) Calling .DriverName
I0912 18:31:22.774019   18531 ssh_runner.go:195] Run: systemctl --version
I0912 18:31:22.774041   18531 main.go:141] libmachine: (functional-233107) Calling .GetSSHHostname
I0912 18:31:22.776435   18531 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:22.776795   18531 main.go:141] libmachine: (functional-233107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:d7:0d", ip: ""} in network mk-functional-233107: {Iface:virbr1 ExpiryTime:2023-09-12 19:28:39 +0000 UTC Type:0 Mac:52:54:00:1e:d7:0d Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:functional-233107 Clientid:01:52:54:00:1e:d7:0d}
I0912 18:31:22.776826   18531 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined IP address 192.168.50.111 and MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:22.776943   18531 main.go:141] libmachine: (functional-233107) Calling .GetSSHPort
I0912 18:31:22.777104   18531 main.go:141] libmachine: (functional-233107) Calling .GetSSHKeyPath
I0912 18:31:22.777266   18531 main.go:141] libmachine: (functional-233107) Calling .GetSSHUsername
I0912 18:31:22.777414   18531 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/functional-233107/id_rsa Username:docker}
I0912 18:31:22.865935   18531 ssh_runner.go:195] Run: sudo crictl images --output json
I0912 18:31:22.894944   18531 main.go:141] libmachine: Making call to close driver server
I0912 18:31:22.894956   18531 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:22.895226   18531 main.go:141] libmachine: (functional-233107) DBG | Closing plugin on server side
I0912 18:31:22.895236   18531 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:22.895252   18531 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:31:22.895268   18531 main.go:141] libmachine: Making call to close driver server
I0912 18:31:22.895278   18531 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:22.895488   18531 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:22.895505   18531 main.go:141] libmachine: (functional-233107) DBG | Closing plugin on server side
I0912 18:31:22.895513   18531 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 ssh pgrep buildkitd: exit status 1 (182.607789ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image build -t localhost/my-image:functional-233107 testdata/build --alsologtostderr
2023/09/12 18:31:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 image build -t localhost/my-image:functional-233107 testdata/build --alsologtostderr: (5.050290335s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-233107 image build -t localhost/my-image:functional-233107 testdata/build --alsologtostderr:
I0912 18:31:23.557227   18631 out.go:296] Setting OutFile to fd 1 ...
I0912 18:31:23.557352   18631 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:23.557360   18631 out.go:309] Setting ErrFile to fd 2...
I0912 18:31:23.557364   18631 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 18:31:23.557515   18631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
I0912 18:31:23.558058   18631 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:23.558476   18631 config.go:182] Loaded profile config "functional-233107": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
I0912 18:31:23.558833   18631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:23.558866   18631 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:23.572835   18631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41133
I0912 18:31:23.573287   18631 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:23.573833   18631 main.go:141] libmachine: Using API Version  1
I0912 18:31:23.573853   18631 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:23.574229   18631 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:23.574411   18631 main.go:141] libmachine: (functional-233107) Calling .GetState
I0912 18:31:23.576209   18631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0912 18:31:23.576255   18631 main.go:141] libmachine: Launching plugin server for driver kvm2
I0912 18:31:23.589875   18631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
I0912 18:31:23.590227   18631 main.go:141] libmachine: () Calling .GetVersion
I0912 18:31:23.590683   18631 main.go:141] libmachine: Using API Version  1
I0912 18:31:23.590721   18631 main.go:141] libmachine: () Calling .SetConfigRaw
I0912 18:31:23.591036   18631 main.go:141] libmachine: () Calling .GetMachineName
I0912 18:31:23.591215   18631 main.go:141] libmachine: (functional-233107) Calling .DriverName
I0912 18:31:23.591408   18631 ssh_runner.go:195] Run: systemctl --version
I0912 18:31:23.591440   18631 main.go:141] libmachine: (functional-233107) Calling .GetSSHHostname
I0912 18:31:23.593986   18631 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:23.594379   18631 main.go:141] libmachine: (functional-233107) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:d7:0d", ip: ""} in network mk-functional-233107: {Iface:virbr1 ExpiryTime:2023-09-12 19:28:39 +0000 UTC Type:0 Mac:52:54:00:1e:d7:0d Iaid: IPaddr:192.168.50.111 Prefix:24 Hostname:functional-233107 Clientid:01:52:54:00:1e:d7:0d}
I0912 18:31:23.594416   18631 main.go:141] libmachine: (functional-233107) DBG | domain functional-233107 has defined IP address 192.168.50.111 and MAC address 52:54:00:1e:d7:0d in network mk-functional-233107
I0912 18:31:23.594522   18631 main.go:141] libmachine: (functional-233107) Calling .GetSSHPort
I0912 18:31:23.594677   18631 main.go:141] libmachine: (functional-233107) Calling .GetSSHKeyPath
I0912 18:31:23.594828   18631 main.go:141] libmachine: (functional-233107) Calling .GetSSHUsername
I0912 18:31:23.594983   18631 sshutil.go:53] new ssh client: &{IP:192.168.50.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/functional-233107/id_rsa Username:docker}
I0912 18:31:23.682347   18631 build_images.go:151] Building image from path: /tmp/build.1417945214.tar
I0912 18:31:23.682427   18631 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 18:31:23.691344   18631 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1417945214.tar
I0912 18:31:23.696394   18631 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1417945214.tar: stat -c "%s %y" /var/lib/minikube/build/build.1417945214.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1417945214.tar': No such file or directory
I0912 18:31:23.696419   18631 ssh_runner.go:362] scp /tmp/build.1417945214.tar --> /var/lib/minikube/build/build.1417945214.tar (3072 bytes)
I0912 18:31:23.721148   18631 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1417945214
I0912 18:31:23.745037   18631 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1417945214 -xf /var/lib/minikube/build/build.1417945214.tar
I0912 18:31:23.753470   18631 containerd.go:378] Building image: /var/lib/minikube/build/build.1417945214
I0912 18:31:23.753530   18631 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1417945214 --local dockerfile=/var/lib/minikube/build/build.1417945214 --output type=image,name=localhost/my-image:functional-233107
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.7s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 1.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:e7576576bc9f27fe5fa28b81075435bd6049f8a06e12f2a298a2970edc62d3ab
#8 exporting manifest sha256:e7576576bc9f27fe5fa28b81075435bd6049f8a06e12f2a298a2970edc62d3ab 0.0s done
#8 exporting config sha256:43f1356aa74bc0a12cd5160a9f81dff0055c98124b102e895a9f907e37f741fa 0.0s done
#8 naming to localhost/my-image:functional-233107 done
#8 DONE 0.3s
I0912 18:31:28.536926   18631 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1417945214 --local dockerfile=/var/lib/minikube/build/build.1417945214 --output type=image,name=localhost/my-image:functional-233107: (4.783363831s)
I0912 18:31:28.537011   18631 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1417945214
I0912 18:31:28.552165   18631 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1417945214.tar
I0912 18:31:28.563980   18631 build_images.go:207] Built localhost/my-image:functional-233107 from /tmp/build.1417945214.tar
I0912 18:31:28.564021   18631 build_images.go:123] succeeded building to: functional-233107
I0912 18:31:28.564025   18631 build_images.go:124] failed building to: 
I0912 18:31:28.564055   18631 main.go:141] libmachine: Making call to close driver server
I0912 18:31:28.564072   18631 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:28.564342   18631 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:28.564361   18631 main.go:141] libmachine: Making call to close connection to plugin binary
I0912 18:31:28.564381   18631 main.go:141] libmachine: Making call to close driver server
I0912 18:31:28.564391   18631 main.go:141] libmachine: (functional-233107) Calling .Close
I0912 18:31:28.564629   18631 main.go:141] libmachine: Successfully made call to close driver server
I0912 18:31:28.564645   18631 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.46s)
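Note: the build flow above first probes the guest for an existing copy of the context tarball; `stat -c "%s %y"` exiting non-zero is what triggers the subsequent `scp` step. A minimal local sketch of that existence check, with a stand-in path rather than the real `/var/lib/minikube/build` one:

```shell
# Existence probe as in the ssh_runner log lines above; a missing file makes
# stat exit non-zero, which the caller treats as "copy the tarball over".
tarball=/tmp/build.example.tar   # stand-in path, not the real build path
rm -f "$tarball"
if stat -c "%s %y" "$tarball" >/dev/null 2>&1; then
  echo "present"
else
  echo "absent"   # matches the "No such file or directory" branch in the log
fi
```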

TestFunctional/parallel/ImageCommands/Setup (2.28s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.25607892s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-233107
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.28s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "273.891663ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "40.925333ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "199.79683ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "43.587514ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-233107 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-233107 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-bgwqf" [5584732b-26fa-4905-b936-fd999617ff01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-bgwqf" [5584732b-26fa-4905-b936-fd999617ff01] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.016947661s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image load --daemon gcr.io/google-containers/addon-resizer:functional-233107 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 image load --daemon gcr.io/google-containers/addon-resizer:functional-233107 --alsologtostderr: (3.988365499s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image load --daemon gcr.io/google-containers/addon-resizer:functional-233107 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 image load --daemon gcr.io/google-containers/addon-resizer:functional-233107 --alsologtostderr: (4.107208765s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.31s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.257715423s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-233107
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image load --daemon gcr.io/google-containers/addon-resizer:functional-233107 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 image load --daemon gcr.io/google-containers/addon-resizer:functional-233107 --alsologtostderr: (4.678608146s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.26s)

TestFunctional/parallel/ServiceCmd/List (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.24s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 service list -o json
functional_test.go:1493: Took "256.013402ms" to run "out/minikube-linux-amd64 -p functional-233107 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.26s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.111:31299
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/MountCmd/any-port (9.68s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdany-port3201079594/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694543466686941000" to /tmp/TestFunctionalparallelMountCmdany-port3201079594/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694543466686941000" to /tmp/TestFunctionalparallelMountCmdany-port3201079594/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694543466686941000" to /tmp/TestFunctionalparallelMountCmdany-port3201079594/001/test-1694543466686941000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.270267ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 12 18:31 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 12 18:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 12 18:31 test-1694543466686941000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh cat /mount-9p/test-1694543466686941000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-233107 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5543a8a5-0577-417a-b57e-223f818260ab] Pending
helpers_test.go:344: "busybox-mount" [5543a8a5-0577-417a-b57e-223f818260ab] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5543a8a5-0577-417a-b57e-223f818260ab] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5543a8a5-0577-417a-b57e-223f818260ab] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.030267194s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-233107 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdany-port3201079594/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.68s)
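Note: the mount test above polls `findmnt -T /mount-9p | grep 9p` until the 9p export appears (the first non-zero exit is expected while the mount daemon is still starting). Where `findmnt` is unavailable, the same check can be approximated by scanning the mount table directly; a sketch (`/mount-9p` only exists inside the minikube guest, so this prints the negative branch elsewhere):

```shell
# Approximate "findmnt -T <dir> | grep 9p" by scanning /proc/self/mounts,
# whose fields are: device mountpoint fstype options dump pass.
mp=/mount-9p
if grep -qs " ${mp} 9p " /proc/self/mounts; then
  echo "9p mounted at ${mp}"
else
  echo "no 9p mount at ${mp}"
fi
```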

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.111:31299
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
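Note: the service tests print NodePort endpoints such as `http://192.168.50.111:31299`. Splitting such a URL into host and port needs no external tools; shell parameter expansion suffices. A sketch using the endpoint from this run:

```shell
# Split scheme://host:port into its parts with parameter expansion.
url=http://192.168.50.111:31299
hostport=${url#*://}       # strip the "http://" scheme prefix
host=${hostport%%:*}       # drop the ":31299" suffix
port=${hostport##*:}       # drop the "192.168.50.111:" prefix
echo "host=$host port=$port"   # host=192.168.50.111 port=31299
```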

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image save gcr.io/google-containers/addon-resizer:functional-233107 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 image save gcr.io/google-containers/addon-resizer:functional-233107 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.211676209s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.21s)
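Note: `image save` writes a plain tarball, so its contents can be sanity-checked without loading it back. A sketch with a scratch tar standing in for `addon-resizer-save.tar` (the manifest file here is a stand-in, not the real OCI layout):

```shell
# Build a scratch tar and list its entries, the way one might inspect a
# saved image tarball before an "image load" round-trip.
tmp=$(mktemp -d)
echo '{}' > "$tmp/manifest.json"      # stand-in for the image manifest
tar -cf "$tmp/image.tar" -C "$tmp" manifest.json
tar -tf "$tmp/image.tar"              # prints: manifest.json
rm -rf "$tmp"
```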

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image rm gcr.io/google-containers/addon-resizer:functional-233107 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.467540059s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-233107
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 image save --daemon gcr.io/google-containers/addon-resizer:functional-233107 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-233107 image save --daemon gcr.io/google-containers/addon-resizer:functional-233107 --alsologtostderr: (1.381349614s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-233107
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.42s)

TestFunctional/parallel/MountCmd/specific-port (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdspecific-port197296952/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (252.584825ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdspecific-port197296952/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 ssh "sudo umount -f /mount-9p": exit status 1 (235.008103ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-233107 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdspecific-port197296952/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3761848471/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3761848471/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3761848471/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T" /mount1: exit status 1 (245.10764ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-233107 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3761848471/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3761848471/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-233107 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3761848471/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.62s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-233107 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.62s)

TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-233107
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-233107
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-233107
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestIngressAddonLegacy/StartLegacyK8sCluster (126.85s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-486408 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0912 18:32:21.516189   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-486408 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (2m6.845776804s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (126.85s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.46s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-486408 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-486408 addons enable ingress --alsologtostderr -v=5: (11.454951734s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.46s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-486408 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.53s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (35.02s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-486408 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-486408 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.435976575s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-486408 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-486408 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [26a17b39-290e-4db8-9ca8-33c8c30b8643] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [26a17b39-290e-4db8-9ca8-33c8c30b8643] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.020457582s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-486408 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-486408 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-486408 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.226
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-486408 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-486408 addons disable ingress-dns --alsologtostderr -v=1: (2.770162648s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-486408 addons disable ingress --alsologtostderr -v=1
E0912 18:34:37.672538   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-486408 addons disable ingress --alsologtostderr -v=1: (7.639091502s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (35.02s)

TestJSONOutput/start/Command (58.92s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-397583 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0912 18:35:05.356495   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-397583 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (58.916241713s)
--- PASS: TestJSONOutput/start/Command (58.92s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.59s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-397583 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.59s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-397583 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.37s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-397583 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-397583 --output=json --user=testUser: (6.369304774s)
--- PASS: TestJSONOutput/stop/Command (6.37s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.17s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-057111 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-057111 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.605379ms)

-- stdout --
	{"specversion":"1.0","id":"08913252-2a50-476e-9c72-f0b80e205aa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-057111] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"74dc5d5a-0ee6-4e86-bdbd-83d5f40e759a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17233"}}
	{"specversion":"1.0","id":"df769026-c233-47f0-bf3f-f11cf82861d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f4957e0-b660-4292-9de0-2baf450f2cee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig"}}
	{"specversion":"1.0","id":"18212a49-2366-4d51-8570-753772f9afb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube"}}
	{"specversion":"1.0","id":"379fa32a-8b51-47d5-b586-c6fe7be5ced4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c3f3c077-ab6b-41b4-8e40-a89acc3eab41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"99acc547-447e-410a-9399-afebfa380b9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
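Each stdout line captured above is a self-contained CloudEvents-style JSON object emitted by minikube's `--output=json` mode. As an illustration (the helper below is hypothetical, not part of the test suite, and the sample is an abbreviated form of the error event shown above), such output can be parsed line by line:

```python
import json

def parse_minikube_events(stdout: str) -> list[dict]:
    """Parse minikube --output=json stdout: one JSON event object per line."""
    events = []
    for line in stdout.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines between events
        events.append(json.loads(line))
    return events

# Abbreviated sample based on the DRV_UNSUPPORTED_OS event in the log above.
sample = '{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS","message":"The driver \'fail\' is not supported on linux/amd64"}}'

events = parse_minikube_events(sample)
errors = [e for e in events if e["type"].endswith(".error")]
print(errors[0]["data"]["exitcode"])  # -> 56
```

The test itself only asserts on the structure of these events; the sketch shows why a trailing `.error` event with `exitcode` 56 corresponds to the non-zero exit recorded above.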
helpers_test.go:175: Cleaning up "json-output-error-057111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-057111
--- PASS: TestErrorJSONOutput (0.17s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (97.65s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-221648 --driver=kvm2  --container-runtime=containerd
E0912 18:35:54.537018   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:54.542272   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:54.552520   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:54.572766   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:54.613038   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:54.693362   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:54.853808   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:55.174392   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:55.815463   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:57.095982   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:35:59.656907   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:36:04.777264   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:36:15.018288   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:36:35.498487   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-221648 --driver=kvm2  --container-runtime=containerd: (47.532343383s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-224012 --driver=kvm2  --container-runtime=containerd
E0912 18:37:16.458736   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-224012 --driver=kvm2  --container-runtime=containerd: (47.38300504s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-221648
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-224012
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-224012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-224012
helpers_test.go:175: Cleaning up "first-221648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-221648
--- PASS: TestMinikubeProfile (97.65s)

TestMountStart/serial/StartWithMountFirst (29.99s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-340605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-340605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.989366408s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.99s)

TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-340605 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-340605 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

TestMountStart/serial/StartWithMountSecond (25.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-357461 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-357461 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (24.727629083s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.73s)

TestMountStart/serial/VerifyMountSecond (0.35s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357461 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357461 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

TestMountStart/serial/DeleteFirst (0.85s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-340605 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.85s)

TestMountStart/serial/VerifyMountPostDelete (0.35s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357461 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357461 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
TestMountStart/serial/Stop (1.1s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-357461
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-357461: (1.09697808s)
--- PASS: TestMountStart/serial/Stop (1.10s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.43s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-357461
E0912 18:38:38.379071   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-357461: (22.425357592s)
--- PASS: TestMountStart/serial/RestartStopped (23.43s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357461 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-357461 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (167.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-252046 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0912 18:39:08.794038   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:08.799330   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:08.809568   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:08.829800   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:08.870090   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:08.950400   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:09.110844   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:09.431418   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:10.072356   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:11.353387   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:13.915181   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:19.036443   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:29.276684   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:39:37.672889   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:39:49.757104   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:40:30.717293   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:40:54.537261   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:41:22.220094   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-252046 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m47.042119641s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (167.46s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-252046 -- rollout status deployment/busybox: (3.717004833s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-6qx57 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-t8dlg -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-6qx57 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-t8dlg -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-6qx57 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-t8dlg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.31s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-6qx57 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-6qx57 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-t8dlg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-252046 -- exec busybox-5bc68d56bd-t8dlg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
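The ping test above extracts the host IP from busybox-style `nslookup` output with `awk 'NR==5' | cut -d' ' -f3`: line 5 carries `Address 1: <ip> <name>`, and field 3 is the IP. A sketch of that extraction, using hypothetical nslookup output for illustration:

```shell
# Hypothetical busybox-style nslookup output; only the pipeline is from the test.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'
# Line 5 is "Address 1: <ip> <name>"; field 3 (space-delimited) is the IP.
host_ip=$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # the address the test then pings with `ping -c 1`
```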

                                                
                                    
TestMultiNode/serial/AddNode (42.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-252046 -v 3 --alsologtostderr
E0912 18:41:52.638167   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-252046 -v 3 --alsologtostderr: (41.820986459s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.40s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp testdata/cp-test.txt multinode-252046:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp multinode-252046:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile324645088/001/cp-test_multinode-252046.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp multinode-252046:/home/docker/cp-test.txt multinode-252046-m02:/home/docker/cp-test_multinode-252046_multinode-252046-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m02 "sudo cat /home/docker/cp-test_multinode-252046_multinode-252046-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp multinode-252046:/home/docker/cp-test.txt multinode-252046-m03:/home/docker/cp-test_multinode-252046_multinode-252046-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m03 "sudo cat /home/docker/cp-test_multinode-252046_multinode-252046-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp testdata/cp-test.txt multinode-252046-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp multinode-252046-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile324645088/001/cp-test_multinode-252046-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp multinode-252046-m02:/home/docker/cp-test.txt multinode-252046:/home/docker/cp-test_multinode-252046-m02_multinode-252046.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046 "sudo cat /home/docker/cp-test_multinode-252046-m02_multinode-252046.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp multinode-252046-m02:/home/docker/cp-test.txt multinode-252046-m03:/home/docker/cp-test_multinode-252046-m02_multinode-252046-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m03 "sudo cat /home/docker/cp-test_multinode-252046-m02_multinode-252046-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp testdata/cp-test.txt multinode-252046-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp multinode-252046-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile324645088/001/cp-test_multinode-252046-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp multinode-252046-m03:/home/docker/cp-test.txt multinode-252046:/home/docker/cp-test_multinode-252046-m03_multinode-252046.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046 "sudo cat /home/docker/cp-test_multinode-252046-m03_multinode-252046.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 cp multinode-252046-m03:/home/docker/cp-test.txt multinode-252046-m02:/home/docker/cp-test_multinode-252046-m03_multinode-252046-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 ssh -n multinode-252046-m02 "sudo cat /home/docker/cp-test_multinode-252046-m03_multinode-252046-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.09s)

                                                
                                    
TestMultiNode/serial/StopNode (2.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-252046 node stop m03: (1.256882409s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-252046 status: exit status 7 (447.020728ms)

                                                
                                                
-- stdout --
	multinode-252046
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-252046-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-252046-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-252046 status --alsologtostderr: exit status 7 (432.342536ms)

                                                
                                                
-- stdout --
	multinode-252046
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-252046-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-252046-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 18:42:38.063510   25323 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:42:38.063757   25323 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:42:38.063765   25323 out.go:309] Setting ErrFile to fd 2...
	I0912 18:42:38.063770   25323 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:42:38.063960   25323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
	I0912 18:42:38.064121   25323 out.go:303] Setting JSON to false
	I0912 18:42:38.064154   25323 mustload.go:65] Loading cluster: multinode-252046
	I0912 18:42:38.064217   25323 notify.go:220] Checking for updates...
	I0912 18:42:38.064527   25323 config.go:182] Loaded profile config "multinode-252046": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 18:42:38.064542   25323 status.go:255] checking status of multinode-252046 ...
	I0912 18:42:38.064888   25323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:42:38.064944   25323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:38.084137   25323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0912 18:42:38.084553   25323 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:38.085054   25323 main.go:141] libmachine: Using API Version  1
	I0912 18:42:38.085074   25323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:38.085485   25323 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:38.085697   25323 main.go:141] libmachine: (multinode-252046) Calling .GetState
	I0912 18:42:38.087352   25323 status.go:330] multinode-252046 host status = "Running" (err=<nil>)
	I0912 18:42:38.087370   25323 host.go:66] Checking if "multinode-252046" exists ...
	I0912 18:42:38.088242   25323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:42:38.088320   25323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:38.103189   25323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32797
	I0912 18:42:38.103593   25323 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:38.103965   25323 main.go:141] libmachine: Using API Version  1
	I0912 18:42:38.103985   25323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:38.104324   25323 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:38.104520   25323 main.go:141] libmachine: (multinode-252046) Calling .GetIP
	I0912 18:42:38.107161   25323 main.go:141] libmachine: (multinode-252046) DBG | domain multinode-252046 has defined MAC address 52:54:00:09:a2:9e in network mk-multinode-252046
	I0912 18:42:38.107594   25323 main.go:141] libmachine: (multinode-252046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:a2:9e", ip: ""} in network mk-multinode-252046: {Iface:virbr1 ExpiryTime:2023-09-12 19:39:08 +0000 UTC Type:0 Mac:52:54:00:09:a2:9e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:multinode-252046 Clientid:01:52:54:00:09:a2:9e}
	I0912 18:42:38.107643   25323 main.go:141] libmachine: (multinode-252046) DBG | domain multinode-252046 has defined IP address 192.168.39.91 and MAC address 52:54:00:09:a2:9e in network mk-multinode-252046
	I0912 18:42:38.107767   25323 host.go:66] Checking if "multinode-252046" exists ...
	I0912 18:42:38.108049   25323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:42:38.108087   25323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:38.122283   25323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39197
	I0912 18:42:38.122635   25323 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:38.123044   25323 main.go:141] libmachine: Using API Version  1
	I0912 18:42:38.123067   25323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:38.123370   25323 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:38.123535   25323 main.go:141] libmachine: (multinode-252046) Calling .DriverName
	I0912 18:42:38.123752   25323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 18:42:38.123773   25323 main.go:141] libmachine: (multinode-252046) Calling .GetSSHHostname
	I0912 18:42:38.126691   25323 main.go:141] libmachine: (multinode-252046) DBG | domain multinode-252046 has defined MAC address 52:54:00:09:a2:9e in network mk-multinode-252046
	I0912 18:42:38.127093   25323 main.go:141] libmachine: (multinode-252046) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:a2:9e", ip: ""} in network mk-multinode-252046: {Iface:virbr1 ExpiryTime:2023-09-12 19:39:08 +0000 UTC Type:0 Mac:52:54:00:09:a2:9e Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:multinode-252046 Clientid:01:52:54:00:09:a2:9e}
	I0912 18:42:38.127125   25323 main.go:141] libmachine: (multinode-252046) DBG | domain multinode-252046 has defined IP address 192.168.39.91 and MAC address 52:54:00:09:a2:9e in network mk-multinode-252046
	I0912 18:42:38.127314   25323 main.go:141] libmachine: (multinode-252046) Calling .GetSSHPort
	I0912 18:42:38.127458   25323 main.go:141] libmachine: (multinode-252046) Calling .GetSSHKeyPath
	I0912 18:42:38.127615   25323 main.go:141] libmachine: (multinode-252046) Calling .GetSSHUsername
	I0912 18:42:38.127728   25323 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/multinode-252046/id_rsa Username:docker}
	I0912 18:42:38.215620   25323 ssh_runner.go:195] Run: systemctl --version
	I0912 18:42:38.221493   25323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:38.235175   25323 kubeconfig.go:92] found "multinode-252046" server: "https://192.168.39.91:8443"
	I0912 18:42:38.235200   25323 api_server.go:166] Checking apiserver status ...
	I0912 18:42:38.235228   25323 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 18:42:38.246998   25323 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1061/cgroup
	I0912 18:42:38.256384   25323 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod50b0338fcdbb2d5eba8e3369cdb313e5/2842d1ad40333f3fc5379950a4510b471a7ee186bd205243dca1e23d9b16c8be"
	I0912 18:42:38.256447   25323 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod50b0338fcdbb2d5eba8e3369cdb313e5/2842d1ad40333f3fc5379950a4510b471a7ee186bd205243dca1e23d9b16c8be/freezer.state
	I0912 18:42:38.268741   25323 api_server.go:204] freezer state: "THAWED"
	I0912 18:42:38.268778   25323 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I0912 18:42:38.274675   25323 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I0912 18:42:38.274701   25323 status.go:421] multinode-252046 apiserver status = Running (err=<nil>)
	I0912 18:42:38.274713   25323 status.go:257] multinode-252046 status: &{Name:multinode-252046 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 18:42:38.274734   25323 status.go:255] checking status of multinode-252046-m02 ...
	I0912 18:42:38.275011   25323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:42:38.275054   25323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:38.289550   25323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
	I0912 18:42:38.289954   25323 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:38.290381   25323 main.go:141] libmachine: Using API Version  1
	I0912 18:42:38.290405   25323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:38.290757   25323 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:38.290921   25323 main.go:141] libmachine: (multinode-252046-m02) Calling .GetState
	I0912 18:42:38.292558   25323 status.go:330] multinode-252046-m02 host status = "Running" (err=<nil>)
	I0912 18:42:38.292575   25323 host.go:66] Checking if "multinode-252046-m02" exists ...
	I0912 18:42:38.292950   25323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:42:38.292996   25323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:38.308588   25323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I0912 18:42:38.308957   25323 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:38.309374   25323 main.go:141] libmachine: Using API Version  1
	I0912 18:42:38.309403   25323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:38.309701   25323 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:38.309888   25323 main.go:141] libmachine: (multinode-252046-m02) Calling .GetIP
	I0912 18:42:38.312620   25323 main.go:141] libmachine: (multinode-252046-m02) DBG | domain multinode-252046-m02 has defined MAC address 52:54:00:1c:2c:37 in network mk-multinode-252046
	I0912 18:42:38.313064   25323 main.go:141] libmachine: (multinode-252046-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:37", ip: ""} in network mk-multinode-252046: {Iface:virbr1 ExpiryTime:2023-09-12 19:40:16 +0000 UTC Type:0 Mac:52:54:00:1c:2c:37 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-252046-m02 Clientid:01:52:54:00:1c:2c:37}
	I0912 18:42:38.313095   25323 main.go:141] libmachine: (multinode-252046-m02) DBG | domain multinode-252046-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:1c:2c:37 in network mk-multinode-252046
	I0912 18:42:38.313257   25323 host.go:66] Checking if "multinode-252046-m02" exists ...
	I0912 18:42:38.313658   25323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:42:38.313704   25323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:38.327493   25323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40307
	I0912 18:42:38.327917   25323 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:38.328437   25323 main.go:141] libmachine: Using API Version  1
	I0912 18:42:38.328464   25323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:38.328758   25323 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:38.328934   25323 main.go:141] libmachine: (multinode-252046-m02) Calling .DriverName
	I0912 18:42:38.329122   25323 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 18:42:38.329142   25323 main.go:141] libmachine: (multinode-252046-m02) Calling .GetSSHHostname
	I0912 18:42:38.332927   25323 main.go:141] libmachine: (multinode-252046-m02) DBG | domain multinode-252046-m02 has defined MAC address 52:54:00:1c:2c:37 in network mk-multinode-252046
	I0912 18:42:38.332964   25323 main.go:141] libmachine: (multinode-252046-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:2c:37", ip: ""} in network mk-multinode-252046: {Iface:virbr1 ExpiryTime:2023-09-12 19:40:16 +0000 UTC Type:0 Mac:52:54:00:1c:2c:37 Iaid: IPaddr:192.168.39.152 Prefix:24 Hostname:multinode-252046-m02 Clientid:01:52:54:00:1c:2c:37}
	I0912 18:42:38.333001   25323 main.go:141] libmachine: (multinode-252046-m02) DBG | domain multinode-252046-m02 has defined IP address 192.168.39.152 and MAC address 52:54:00:1c:2c:37 in network mk-multinode-252046
	I0912 18:42:38.333145   25323 main.go:141] libmachine: (multinode-252046-m02) Calling .GetSSHPort
	I0912 18:42:38.333712   25323 main.go:141] libmachine: (multinode-252046-m02) Calling .GetSSHKeyPath
	I0912 18:42:38.333940   25323 main.go:141] libmachine: (multinode-252046-m02) Calling .GetSSHUsername
	I0912 18:42:38.334102   25323 sshutil.go:53] new ssh client: &{IP:192.168.39.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17233-3774/.minikube/machines/multinode-252046-m02/id_rsa Username:docker}
	I0912 18:42:38.426688   25323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 18:42:38.438627   25323 status.go:257] multinode-252046-m02 status: &{Name:multinode-252046-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0912 18:42:38.438656   25323 status.go:255] checking status of multinode-252046-m03 ...
	I0912 18:42:38.439003   25323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:42:38.439040   25323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:42:38.453806   25323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0912 18:42:38.454210   25323 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:42:38.454646   25323 main.go:141] libmachine: Using API Version  1
	I0912 18:42:38.454666   25323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:42:38.454995   25323 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:42:38.455180   25323 main.go:141] libmachine: (multinode-252046-m03) Calling .GetState
	I0912 18:42:38.456793   25323 status.go:330] multinode-252046-m03 host status = "Stopped" (err=<nil>)
	I0912 18:42:38.456805   25323 status.go:343] host is not running, skipping remaining checks
	I0912 18:42:38.456810   25323 status.go:257] multinode-252046-m03 status: &{Name:multinode-252046-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)

TestMultiNode/serial/StartAfterStop (26.9s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-252046 node start m03 --alsologtostderr: (26.248139208s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.90s)

TestMultiNode/serial/RestartKeepsNodes (309.07s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-252046
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-252046
E0912 18:44:08.794703   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:44:36.479363   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:44:37.672759   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:45:54.536797   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 18:46:00.716749   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-252046: (3m4.527784706s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-252046 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-252046 --wait=true -v=8 --alsologtostderr: (2m4.46913152s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-252046
--- PASS: TestMultiNode/serial/RestartKeepsNodes (309.07s)

TestMultiNode/serial/DeleteNode (1.69s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-252046 node delete m03: (1.1544692s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.69s)

TestMultiNode/serial/StopMultiNode (183.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 stop
E0912 18:49:08.794223   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:49:37.672956   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:50:54.536600   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-252046 stop: (3m2.854475162s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-252046 status: exit status 7 (84.170323ms)

-- stdout --
	multinode-252046
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-252046-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-252046 status --alsologtostderr: exit status 7 (75.558434ms)

-- stdout --
	multinode-252046
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-252046-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0912 18:51:19.099215   27446 out.go:296] Setting OutFile to fd 1 ...
	I0912 18:51:19.099449   27446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:51:19.099458   27446 out.go:309] Setting ErrFile to fd 2...
	I0912 18:51:19.099463   27446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 18:51:19.099673   27446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
	I0912 18:51:19.099863   27446 out.go:303] Setting JSON to false
	I0912 18:51:19.099894   27446 mustload.go:65] Loading cluster: multinode-252046
	I0912 18:51:19.099934   27446 notify.go:220] Checking for updates...
	I0912 18:51:19.100320   27446 config.go:182] Loaded profile config "multinode-252046": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 18:51:19.100334   27446 status.go:255] checking status of multinode-252046 ...
	I0912 18:51:19.100732   27446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:51:19.100794   27446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:51:19.117148   27446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39403
	I0912 18:51:19.117536   27446 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:51:19.118049   27446 main.go:141] libmachine: Using API Version  1
	I0912 18:51:19.118077   27446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:51:19.118384   27446 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:51:19.118567   27446 main.go:141] libmachine: (multinode-252046) Calling .GetState
	I0912 18:51:19.120010   27446 status.go:330] multinode-252046 host status = "Stopped" (err=<nil>)
	I0912 18:51:19.120028   27446 status.go:343] host is not running, skipping remaining checks
	I0912 18:51:19.120033   27446 status.go:257] multinode-252046 status: &{Name:multinode-252046 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 18:51:19.120048   27446 status.go:255] checking status of multinode-252046-m02 ...
	I0912 18:51:19.120327   27446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0912 18:51:19.120364   27446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0912 18:51:19.134092   27446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36263
	I0912 18:51:19.134470   27446 main.go:141] libmachine: () Calling .GetVersion
	I0912 18:51:19.134878   27446 main.go:141] libmachine: Using API Version  1
	I0912 18:51:19.134900   27446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0912 18:51:19.135133   27446 main.go:141] libmachine: () Calling .GetMachineName
	I0912 18:51:19.135275   27446 main.go:141] libmachine: (multinode-252046-m02) Calling .GetState
	I0912 18:51:19.137096   27446 status.go:330] multinode-252046-m02 host status = "Stopped" (err=<nil>)
	I0912 18:51:19.137118   27446 status.go:343] host is not running, skipping remaining checks
	I0912 18:51:19.137127   27446 status.go:257] multinode-252046-m02 status: &{Name:multinode-252046-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.01s)

TestMultiNode/serial/RestartMultiNode (95.54s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-252046 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0912 18:52:17.580537   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-252046 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m35.026728218s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-252046 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (95.54s)

TestMultiNode/serial/ValidateNameConflict (50.46s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-252046
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-252046-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-252046-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (56.532585ms)

-- stdout --
	* [multinode-252046-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-252046-m02' is duplicated with machine name 'multinode-252046-m02' in profile 'multinode-252046'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-252046-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-252046-m03 --driver=kvm2  --container-runtime=containerd: (49.199279958s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-252046
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-252046: exit status 80 (214.915733ms)

-- stdout --
	* Adding node m03 to cluster multinode-252046
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-252046-m03 already exists in multinode-252046-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-252046-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.46s)

TestPreload (317.94s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-708529 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0912 18:54:08.794596   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:54:37.674918   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 18:55:31.840444   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:55:54.537545   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-708529 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m19.403392335s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-708529 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-708529 image pull gcr.io/k8s-minikube/busybox: (2.286602115s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-708529
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-708529: (1m42.333734536s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-708529 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-708529 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m12.668692371s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-708529 image list
helpers_test.go:175: Cleaning up "test-preload-708529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-708529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-708529: (1.054489161s)
--- PASS: TestPreload (317.94s)

TestScheduledStopUnix (118.82s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-628121 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0912 18:59:08.793769   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 18:59:37.673085   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-628121 --memory=2048 --driver=kvm2  --container-runtime=containerd: (47.268857599s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-628121 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-628121 -n scheduled-stop-628121
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-628121 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-628121 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-628121 -n scheduled-stop-628121
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-628121
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-628121 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0912 19:00:54.536824   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-628121
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-628121: exit status 7 (65.267046ms)

-- stdout --
	scheduled-stop-628121
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-628121 -n scheduled-stop-628121
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-628121 -n scheduled-stop-628121: exit status 7 (56.074329ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-628121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-628121
--- PASS: TestScheduledStopUnix (118.82s)

TestRunningBinaryUpgrade (216.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.1297961689.exe start -p running-upgrade-786133 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.1297961689.exe start -p running-upgrade-786133 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m11.59575597s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-786133 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-786133 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m21.385136861s)
helpers_test.go:175: Cleaning up "running-upgrade-786133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-786133
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-786133: (1.136497195s)
--- PASS: TestRunningBinaryUpgrade (216.46s)

TestKubernetesUpgrade (251.07s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-100471 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0912 19:02:40.717353   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-100471 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m33.847141821s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-100471
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-100471: (2.099207637s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-100471 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-100471 status --format={{.Host}}: exit status 7 (81.468861ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-100471 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-100471 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m20.693007018s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-100471 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-100471 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-100471 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (75.216503ms)

-- stdout --
	* [kubernetes-upgrade-100471] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-100471
	    minikube start -p kubernetes-upgrade-100471 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1004712 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-100471 --kubernetes-version=v1.28.1
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-100471 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-100471 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m12.997422543s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-100471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-100471
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-100471: (1.217972793s)
--- PASS: TestKubernetesUpgrade (251.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-776743 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-776743 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (74.575393ms)

-- stdout --
	* [NoKubernetes-776743] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (100.96s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-776743 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-776743 --driver=kvm2  --container-runtime=containerd: (1m40.699084431s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-776743 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.96s)

TestNoKubernetes/serial/StartWithStopK8s (46.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-776743 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-776743 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (45.254240386s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-776743 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-776743 status -o json: exit status 2 (238.451492ms)

-- stdout --
	{"Name":"NoKubernetes-776743","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-776743
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (46.47s)

TestStoppedBinaryUpgrade/Setup (2.19s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.19s)

TestStoppedBinaryUpgrade/Upgrade (137.35s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.2364334177.exe start -p stopped-upgrade-447127 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.2364334177.exe start -p stopped-upgrade-447127 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m18.148120746s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.2364334177.exe -p stopped-upgrade-447127 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.2364334177.exe -p stopped-upgrade-447127 stop: (1.459984105s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-447127 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-447127 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (57.736641489s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (137.35s)

TestNoKubernetes/serial/Start (30.71s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-776743 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-776743 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.708531505s)
--- PASS: TestNoKubernetes/serial/Start (30.71s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-776743 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-776743 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.793518ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
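The `ssh: Process exited with status 3` above is the expected signal here: `systemctl is-active` exits 0 when a unit is active and non-zero otherwise (3 is the usual "inactive" code), so a non-zero status means the kubelet is stopped. A hedged sketch, where `check_kubelet_status` is a hypothetical helper interpreting such a code the way the test does (no VM or systemd needed to run it):

```shell
# Interpret a `systemctl is-active` exit code the way the test does:
# 0 = unit active, anything else = not active (3 is commonly "inactive").
check_kubelet_status() {
  if [ "$1" -eq 0 ]; then
    echo "kubelet running"
  else
    echo "kubelet not running (status $1)"
  fi
}

check_kubelet_status 3   # kubelet not running (status 3)
```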
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (6.57s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.731420517s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.839656992s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.57s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-776743
E0912 19:04:08.793925   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-776743: (1.246910964s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (53.63s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-776743 --driver=kvm2  --container-runtime=containerd
E0912 19:04:37.673327   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-776743 --driver=kvm2  --container-runtime=containerd: (53.633453507s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (53.63s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-776743 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-776743 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.531791ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-447127
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

TestNetworkPlugins/group/false (3.01s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-585035 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-585035 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (100.329958ms)

                                                
                                                
-- stdout --
	* [false-585035] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17233
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 19:06:26.720548   35859 out.go:296] Setting OutFile to fd 1 ...
	I0912 19:06:26.720669   35859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 19:06:26.720678   35859 out.go:309] Setting ErrFile to fd 2...
	I0912 19:06:26.720682   35859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 19:06:26.720836   35859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17233-3774/.minikube/bin
	I0912 19:06:26.721402   35859 out.go:303] Setting JSON to false
	I0912 19:06:26.722404   35859 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2933,"bootTime":1694542654,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 19:06:26.722481   35859 start.go:138] virtualization: kvm guest
	I0912 19:06:26.724712   35859 out.go:177] * [false-585035] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 19:06:26.726055   35859 out.go:177]   - MINIKUBE_LOCATION=17233
	I0912 19:06:26.726061   35859 notify.go:220] Checking for updates...
	I0912 19:06:26.727496   35859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 19:06:26.728863   35859 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17233-3774/kubeconfig
	I0912 19:06:26.730418   35859 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17233-3774/.minikube
	I0912 19:06:26.731764   35859 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 19:06:26.733221   35859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 19:06:26.735137   35859 config.go:182] Loaded profile config "cert-expiration-395683": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 19:06:26.735282   35859 config.go:182] Loaded profile config "cert-options-891635": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 19:06:26.735406   35859 config.go:182] Loaded profile config "kubernetes-upgrade-100471": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.1
	I0912 19:06:26.735505   35859 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 19:06:26.773833   35859 out.go:177] * Using the kvm2 driver based on user configuration
	I0912 19:06:26.775246   35859 start.go:298] selected driver: kvm2
	I0912 19:06:26.775264   35859 start.go:902] validating driver "kvm2" against <nil>
	I0912 19:06:26.775279   35859 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 19:06:26.777638   35859 out.go:177] 
	W0912 19:06:26.779042   35859 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0912 19:06:26.780335   35859 out.go:177] 

                                                
                                                
** /stderr **
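The MK_USAGE failure above is the point of this test: with the containerd runtime, `--cni=false` is rejected up front and minikube exits with status 14 before any VM work starts. A hedged sketch of that validation step, mirroring the observed behavior only (this is not minikube's actual implementation):

```shell
# Sketch of the up-front flag check this test exercises: the containerd
# runtime requires a CNI, so --cni=false is rejected with exit status 14.
validate_cni() {
  runtime=$1
  cni=$2
  if [ "$runtime" = "containerd" ] && [ "$cni" = "false" ]; then
    echo 'X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI' >&2
    return 14
  fi
  return 0
}

validate_cni containerd false 2>/dev/null || echo "rejected with exit status $?"
```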
net_test.go:88:
----------------------- debugLogs start: false-585035 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-585035

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-585035

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-585035

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-585035

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-585035

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-585035

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-585035

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-585035

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-585035

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-585035

>>> host: /etc/nsswitch.conf:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /etc/hosts:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /etc/resolv.conf:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-585035

>>> host: crictl pods:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: crictl containers:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> k8s: describe netcat deployment:
error: context "false-585035" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-585035" does not exist

>>> k8s: netcat logs:
error: context "false-585035" does not exist

>>> k8s: describe coredns deployment:
error: context "false-585035" does not exist

>>> k8s: describe coredns pods:
error: context "false-585035" does not exist

>>> k8s: coredns logs:
error: context "false-585035" does not exist

>>> k8s: describe api server pod(s):
error: context "false-585035" does not exist

>>> k8s: api server logs:
error: context "false-585035" does not exist

>>> host: /etc/cni:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: ip a s:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: ip r s:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: iptables-save:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: iptables table nat:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> k8s: describe kube-proxy daemon set:
error: context "false-585035" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-585035" does not exist

>>> k8s: kube-proxy logs:
error: context "false-585035" does not exist

>>> host: kubelet daemon status:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: kubelet daemon config:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> k8s: kubelet logs:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17233-3774/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.72.88:8443
  name: cert-expiration-395683
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17233-3774/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.50.61:8443
  name: kubernetes-upgrade-100471
contexts:
- context:
    cluster: cert-expiration-395683
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-395683
  name: cert-expiration-395683
- context:
    cluster: kubernetes-upgrade-100471
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-100471
  name: kubernetes-upgrade-100471
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-395683
  user:
    client-certificate: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/cert-expiration-395683/client.crt
    client-key: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/cert-expiration-395683/client.key
- name: kubernetes-upgrade-100471
  user:
    client-certificate: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/kubernetes-upgrade-100471/client.crt
    client-key: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/kubernetes-upgrade-100471/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-585035

>>> host: docker daemon status:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: docker daemon config:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /etc/docker/daemon.json:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: docker system info:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: cri-docker daemon status:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: cri-docker daemon config:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: cri-dockerd version:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: containerd daemon status:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: containerd daemon config:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /etc/containerd/config.toml:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: containerd config dump:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: crio daemon status:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: crio daemon config:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: /etc/crio:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

>>> host: crio config:
* Profile "false-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-585035"

----------------------- debugLogs end: false-585035 [took: 2.748859901s] --------------------------------
helpers_test.go:175: Cleaning up "false-585035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-585035
--- PASS: TestNetworkPlugins/group/false (3.01s)

TestPause/serial/Start (63.16s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-394673 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-394673 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m3.157135384s)
--- PASS: TestPause/serial/Start (63.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (156.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-108965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-108965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m36.370806462s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (156.37s)

TestStartStop/group/no-preload/serial/FirstStart (154.51s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-337947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-337947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (2m34.505400743s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (154.51s)

TestPause/serial/SecondStartNoReconfiguration (18.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-394673 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-394673 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (17.998773893s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (18.01s)

TestPause/serial/Pause (0.82s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-394673 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-394673 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-394673 --output=json --layout=cluster: exit status 2 (271.945987ms)

-- stdout --
	{"Name":"pause-394673","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-394673","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

TestPause/serial/Unpause (0.6s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-394673 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.60s)

TestPause/serial/PauseAgain (0.68s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-394673 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.68s)

TestPause/serial/DeletePaused (1.5s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-394673 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-394673 --alsologtostderr -v=5: (1.50105943s)
--- PASS: TestPause/serial/DeletePaused (1.50s)

TestPause/serial/VerifyDeletedResources (0.5s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)

TestStartStop/group/embed-certs/serial/FirstStart (74.5s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-868848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-868848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (1m14.503118709s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.50s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-321766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
E0912 19:08:57.581614   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
E0912 19:09:08.794627   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-321766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (1m3.17382953s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.17s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-108965 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd9404fe-eb7f-4d10-8d56-7d93ea2e1ad5] Pending
helpers_test.go:344: "busybox" [cd9404fe-eb7f-4d10-8d56-7d93ea2e1ad5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cd9404fe-eb7f-4d10-8d56-7d93ea2e1ad5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.028688978s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-108965 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.51s)

TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-868848 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5971bf2b-07d3-483b-be36-62e3adfde21f] Pending
helpers_test.go:344: "busybox" [5971bf2b-07d3-483b-be36-62e3adfde21f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5971bf2b-07d3-483b-be36-62e3adfde21f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.024657592s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-868848 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

TestStartStop/group/no-preload/serial/DeployApp (10.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-337947 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [15d10809-f5da-4240-aba8-aa7314e4cd60] Pending
helpers_test.go:344: "busybox" [15d10809-f5da-4240-aba8-aa7314e4cd60] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [15d10809-f5da-4240-aba8-aa7314e4cd60] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.035402535s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-337947 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-108965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-108965 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/old-k8s-version/serial/Stop (92.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-108965 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-108965 --alsologtostderr -v=3: (1m32.444751803s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.44s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-868848 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-868848 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.230706082s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-868848 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/embed-certs/serial/Stop (92.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-868848 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-868848 --alsologtostderr -v=3: (1m32.07609537s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.08s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-337947 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-337947 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.098008459s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-337947 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.18s)

TestStartStop/group/no-preload/serial/Stop (91.52s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-337947 --alsologtostderr -v=3
E0912 19:09:37.672528   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-337947 --alsologtostderr -v=3: (1m31.518117127s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.52s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-321766 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9e67a55d-029d-4640-9e87-55059d7a6335] Pending
helpers_test.go:344: "busybox" [9e67a55d-029d-4640-9e87-55059d7a6335] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9e67a55d-029d-4640-9e87-55059d7a6335] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.021541474s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-321766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-321766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-321766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (101.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-321766 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-321766 --alsologtostderr -v=3: (1m41.562932562s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (101.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108965 -n old-k8s-version-108965
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108965 -n old-k8s-version-108965: exit status 7 (63.210569ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-108965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (457.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-108965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0912 19:10:54.537128   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-108965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m37.268111517s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-108965 -n old-k8s-version-108965
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (457.61s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-868848 -n embed-certs-868848
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-868848 -n embed-certs-868848: exit status 7 (63.222126ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-868848 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (377.73s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-868848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-868848 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (6m17.318529907s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-868848 -n embed-certs-868848
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (377.73s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-337947 -n no-preload-337947
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-337947 -n no-preload-337947: exit status 7 (59.894109ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-337947 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (358.05s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-337947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-337947 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (5m57.728041285s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-337947 -n no-preload-337947
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (358.05s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321766 -n default-k8s-diff-port-321766
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321766 -n default-k8s-diff-port-321766: exit status 7 (64.053463ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-321766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (330.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-321766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
E0912 19:12:11.841337   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 19:14:08.794080   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 19:14:37.672774   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
E0912 19:15:54.536904   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-321766 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (5m29.878721062s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-321766 -n default-k8s-diff-port-321766
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (330.20s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (22.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z9mxx" [e21f8bda-fcde-43a0-a85a-46e7ad3e3bfb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z9mxx" [e21f8bda-fcde-43a0-a85a-46e7ad3e3bfb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.025754796s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (22.03s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x9prh" [f32f5404-83aa-4f64-b36b-fe3e2322e31c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x9prh" [f32f5404-83aa-4f64-b36b-fe3e2322e31c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.019993837s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.02s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6b4h8" [262c7fff-cb16-4aa6-90d8-698434453462] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6b4h8" [262c7fff-cb16-4aa6-90d8-698434453462] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.023692132s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-z9mxx" [e21f8bda-fcde-43a0-a85a-46e7ad3e3bfb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012227022s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-337947 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x9prh" [f32f5404-83aa-4f64-b36b-fe3e2322e31c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01287043s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-868848 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-337947 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.73s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-337947 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-337947 -n no-preload-337947
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-337947 -n no-preload-337947: exit status 2 (265.328379ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-337947 -n no-preload-337947
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-337947 -n no-preload-337947: exit status 2 (307.09859ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-337947 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-337947 -n no-preload-337947
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-337947 -n no-preload-337947
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.73s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-868848 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/FirstStart (61.27s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-073796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-073796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (1m1.267258306s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.27s)

TestStartStop/group/embed-certs/serial/Pause (3.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-868848 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-868848 -n embed-certs-868848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-868848 -n embed-certs-868848: exit status 2 (285.621405ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-868848 -n embed-certs-868848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-868848 -n embed-certs-868848: exit status 2 (262.182155ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-868848 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-868848 -n embed-certs-868848
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-868848 -n embed-certs-868848
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6b4h8" [262c7fff-cb16-4aa6-90d8-698434453462] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022556942s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-321766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestNetworkPlugins/group/auto/Start (87.25s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m27.254244921s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.25s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-321766 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-321766 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-321766 -n default-k8s-diff-port-321766
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-321766 -n default-k8s-diff-port-321766: exit status 2 (257.060239ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-321766 -n default-k8s-diff-port-321766
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-321766 -n default-k8s-diff-port-321766: exit status 2 (261.581698ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-321766 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-321766 -n default-k8s-diff-port-321766
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-321766 -n default-k8s-diff-port-321766
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)

TestNetworkPlugins/group/kindnet/Start (113.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m53.223344898s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (113.22s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mfpwv" [23c3b81f-dcc4-45fa-a3ca-4d4cb39d2f35] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.144403376s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.15s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-073796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-073796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.462834389s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)

TestStartStop/group/newest-cni/serial/Stop (7.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-073796 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-073796 --alsologtostderr -v=3: (7.202256836s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.20s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-mfpwv" [23c3b81f-dcc4-45fa-a3ca-4d4cb39d2f35] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.037876764s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-108965 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-108965 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-108965 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108965 -n old-k8s-version-108965
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108965 -n old-k8s-version-108965: exit status 2 (263.600298ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-108965 -n old-k8s-version-108965
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-108965 -n old-k8s-version-108965: exit status 2 (246.160637ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-108965 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-108965 -n old-k8s-version-108965
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-108965 -n old-k8s-version-108965
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.47s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-073796 -n newest-cni-073796
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-073796 -n newest-cni-073796: exit status 7 (74.386554ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-073796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (53.39s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-073796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-073796 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.1: (53.126387789s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-073796 -n newest-cni-073796
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (53.39s)

TestNetworkPlugins/group/calico/Start (119.94s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m59.941711577s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.94s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-585035 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (9.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-585035 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8txq7" [c3c45915-eb24-4098-9879-af0955ea5339] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8txq7" [c3c45915-eb24-4098-9879-af0955ea5339] Running
E0912 19:19:08.794164   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/ingress-addon-legacy-486408/client.crt: no such file or directory
E0912 19:19:10.045798   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:19:10.051044   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:19:10.061332   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:19:10.082040   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:19:10.122307   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:19:10.203124   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:19:10.363934   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:19:10.684736   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:19:11.325546   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:19:12.606680   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.016683323s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.41s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-585035 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (92.04s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0912 19:19:30.529348   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m32.040714612s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.04s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-073796 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.6s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-073796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-073796 -n newest-cni-073796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-073796 -n newest-cni-073796: exit status 2 (262.315573ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-073796 -n newest-cni-073796
E0912 19:19:37.443534   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/no-preload-337947/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-073796 -n newest-cni-073796: exit status 2 (279.725484ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-073796 --alsologtostderr -v=1
E0912 19:19:37.673143   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/addons-436608/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-073796 -n newest-cni-073796
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-073796 -n newest-cni-073796
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.60s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-zdcxb" [3a08f791-fe47-4c17-bfa6-2e9336a08fef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023287886s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/enable-default-cni/Start (97.6s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m37.59495393s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (97.60s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-585035 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.44s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-585035 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lg4pc" [0c86c247-8bdc-4e3b-b09e-e7f94aeae2c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lg4pc" [0c86c247-8bdc-4e3b-b09e-e7f94aeae2c7] Running
E0912 19:19:51.009789   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.01147349s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.44s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-585035 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (99.65s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0912 19:20:20.245610   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/default-k8s-diff-port-321766/client.crt: no such file or directory
E0912 19:20:31.970427   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/old-k8s-version-108965/client.crt: no such file or directory
E0912 19:20:38.884658   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/no-preload-337947/client.crt: no such file or directory
E0912 19:20:40.726720   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/default-k8s-diff-port-321766/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m39.651553272s)
--- PASS: TestNetworkPlugins/group/flannel/Start (99.65s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dz4rk" [64fd92bb-71f6-41fc-bb42-917dbaf8df6d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.022869688s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-585035 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (12.57s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-585035 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ft4sx" [34347afe-38ba-41eb-a77e-30c4eeeeb826] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 19:20:54.537238   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/functional-233107/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-ft4sx" [34347afe-38ba-41eb-a77e-30c4eeeeb826] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.130767736s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.57s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-585035 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.48s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.99s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-585035 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rtw6x" [f920c5cd-ea92-423d-93be-6a991c4de7ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rtw6x" [f920c5cd-ea92-423d-93be-6a991c4de7ae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.019130624s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.99s)

TestNetworkPlugins/group/calico/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-585035 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-585035 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-585035 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.54s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-585035 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kdbh7" [800a62ee-371f-4985-a3e5-c765d191f43e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kdbh7" [800a62ee-371f-4985-a3e5-c765d191f43e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.0201172s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.54s)

TestNetworkPlugins/group/bridge/Start (63.99s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0912 19:21:21.688495   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/default-k8s-diff-port-321766/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-585035 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m3.984976258s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.99s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-585035 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6crk2" [1c9f2706-10a9-4567-b79a-308de0d43932] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.026316289s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-585035 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/flannel/NetCatPod (10.35s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-585035 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-45tqq" [8f70c5e3-2f47-4c85-a028-98dc3418a722] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 19:22:00.805309   10956 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/no-preload-337947/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-45tqq" [8f70c5e3-2f47-4c85-a028-98dc3418a722] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.009430168s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-585035 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.67s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-585035 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.67s)

TestNetworkPlugins/group/bridge/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-585035 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9gm4w" [fcd47997-2b65-40eb-9d8f-84bfd04d781c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9gm4w" [fcd47997-2b65-40eb-9d8f-84bfd04d781c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.009320911s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-585035 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-585035 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
Test skip (36/302)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.1/cached-images 0
13 TestDownloadOnly/v1.28.1/binaries 0
14 TestDownloadOnly/v1.28.1/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
114 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
115 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
116 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
117 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
243 TestStartStop/group/disable-driver-mounts 0.13
247 TestNetworkPlugins/group/kubenet 5.18
255 TestNetworkPlugins/group/cilium 3.66

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-554424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-554424
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestNetworkPlugins/group/kubenet (5.18s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-585035 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-585035

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-585035

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-585035

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-585035

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-585035

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-585035

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-585035

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-585035

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-585035

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-585035

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: /etc/hosts:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: /etc/resolv.conf:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-585035

>>> host: crictl pods:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: crictl containers:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> k8s: describe netcat deployment:
error: context "kubenet-585035" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-585035" does not exist

>>> k8s: netcat logs:
error: context "kubenet-585035" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-585035" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-585035" does not exist

>>> k8s: coredns logs:
error: context "kubenet-585035" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-585035" does not exist

>>> k8s: api server logs:
error: context "kubenet-585035" does not exist

>>> host: /etc/cni:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: ip a s:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: ip r s:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: iptables-save:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: iptables table nat:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-585035" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-585035" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-585035" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: kubelet daemon config:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> k8s: kubelet logs:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17233-3774/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.72.88:8443
  name: cert-expiration-395683
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17233-3774/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.50.61:8443
  name: kubernetes-upgrade-100471
contexts:
- context:
    cluster: cert-expiration-395683
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-395683
  name: cert-expiration-395683
- context:
    cluster: kubernetes-upgrade-100471
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: kubernetes-upgrade-100471
  name: kubernetes-upgrade-100471
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-395683
  user:
    client-certificate: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/cert-expiration-395683/client.crt
    client-key: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/cert-expiration-395683/client.key
- name: kubernetes-upgrade-100471
  user:
    client-certificate: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/kubernetes-upgrade-100471/client.crt
    client-key: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/kubernetes-upgrade-100471/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-585035

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: cri-docker daemon config:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: cri-dockerd version:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: containerd daemon status:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: containerd daemon config:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: containerd config dump:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: crio daemon status:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: crio daemon config:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: /etc/crio:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

>>> host: crio config:
* Profile "kubenet-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-585035"

----------------------- debugLogs end: kubenet-585035 [took: 5.039610632s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-585035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-585035
--- SKIP: TestNetworkPlugins/group/kubenet (5.18s)

TestNetworkPlugins/group/cilium (3.66s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-585035 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-585035

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-585035

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-585035

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-585035

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-585035

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-585035

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-585035

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-585035

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-585035

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-585035

>>> host: /etc/nsswitch.conf:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /etc/hosts:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /etc/resolv.conf:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-585035

>>> host: crictl pods:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: crictl containers:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> k8s: describe netcat deployment:
error: context "cilium-585035" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-585035" does not exist

>>> k8s: netcat logs:
error: context "cilium-585035" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-585035" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-585035" does not exist

>>> k8s: coredns logs:
error: context "cilium-585035" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-585035" does not exist

>>> k8s: api server logs:
error: context "cilium-585035" does not exist

>>> host: /etc/cni:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: ip a s:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: ip r s:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: iptables-save:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: iptables table nat:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-585035

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-585035

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-585035" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-585035" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-585035

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-585035

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-585035" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-585035" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-585035" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-585035" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-585035" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: kubelet daemon config:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> k8s: kubelet logs:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17233-3774/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.72.88:8443
  name: cert-expiration-395683
contexts:
- context:
    cluster: cert-expiration-395683
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 19:05:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-395683
  name: cert-expiration-395683
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-395683
  user:
    client-certificate: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/cert-expiration-395683/client.crt
    client-key: /home/jenkins/minikube-integration/17233-3774/.minikube/profiles/cert-expiration-395683/client.key
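[Editor's note] The repeated "context was not found" errors above are consistent with this kubeconfig: the only entry is cert-expiration-395683 and current-context is empty, so no cilium-585035 context exists for the debug-log collector to use. As a minimal sketch (the sample file path and contents below are hypothetical, not taken from this run), context names can be listed straight out of a kubeconfig with awk, without needing kubectl:

```shell
# Write a tiny sample kubeconfig mirroring the shape dumped above (hypothetical path).
cat > /tmp/sample-kubeconfig.yaml <<'EOF'
apiVersion: v1
kind: Config
current-context: ""
contexts:
- context:
    cluster: cert-expiration-395683
    user: cert-expiration-395683
  name: cert-expiration-395683
EOF

# Print the name of each entry under the top-level "contexts:" key.
awk '/^contexts:/{in_c=1; next} /^[a-z]/{in_c=0} in_c && /name:/{print $2}' /tmp/sample-kubeconfig.yaml
```

With a real kubeconfig, `kubectl config get-contexts` gives the same information plus the current-context marker.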

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-585035

>>> host: docker daemon status:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: docker daemon config:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: docker system info:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: cri-docker daemon status:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: cri-docker daemon config:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: cri-dockerd version:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: containerd daemon status:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: containerd daemon config:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: containerd config dump:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: crio daemon status:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: crio daemon config:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: /etc/crio:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

>>> host: crio config:
* Profile "cilium-585035" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585035"

----------------------- debugLogs end: cilium-585035 [took: 3.488240383s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-585035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-585035
--- SKIP: TestNetworkPlugins/group/cilium (3.66s)