Test Report: KVM_Linux_containerd 18665

dfbe577bff734bd70c7906dfbd0bc89e038b5d72:2024-04-17:34073

Failed tests (1/325)

Order  Failed test                                 Duration (s)
94     TestFunctional/parallel/ServiceCmdConnect   33.14
TestFunctional/parallel/ServiceCmdConnect (33.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-366561 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-366561 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-4gbv4" [5a5ace5c-ef6a-46be-b5e8-9e5e77bd917a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-4gbv4" [5a5ace5c-ef6a-46be-b5e8-9e5e77bd917a] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.006650235s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.13:31904
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.39.13:31904: Get "http://192.168.39.13:31904": dial tcp 192.168.39.13:31904: connect: connection refused
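The seven refusals above all hit the same NodePort URL; the test keeps polling until the retry budget checked at functional_test.go:1677 is exhausted. To replay the check by hand, here is a minimal, self-contained sketch of that fetch-with-retry pattern in Go — the 5s interval, 35s budget, and fetchWithRetry name are illustrative assumptions, not the test's actual code; the URL is the one reported above.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// fetchWithRetry polls url until it answers or the budget runs out,
	// returning the last error (e.g. "connect: connection refused" above).
	func fetchWithRetry(url string, budget time.Duration) ([]byte, error) {
		var lastErr error
		for start := time.Now(); time.Since(start) < budget; time.Sleep(5 * time.Second) {
			resp, err := http.Get(url)
			if err != nil {
				lastErr = err
				continue
			}
			body, readErr := io.ReadAll(resp.Body)
			resp.Body.Close()
			if readErr != nil {
				lastErr = readErr
				continue
			}
			return body, nil
		}
		return nil, fmt.Errorf("failed to fetch %s: %w", url, lastErr)
	}

	func main() {
		if _, err := fetchWithRetry("http://192.168.39.13:31904", 35*time.Second); err != nil {
			fmt.Println(err)
		}
	}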
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-366561 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-57b4589c47-4gbv4
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-366561/192.168.39.13
Start Time:       Wed, 17 Apr 2024 18:04:31 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=57b4589c47
Annotations:      <none>
Status:           Running
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-57b4589c47
Containers:
  echoserver:
    Container ID:   containerd://7da89b68f9a4f24d12e722e10201aa252f0291f25ddbcba78a9478cb89374d26
    Image:          registry.k8s.io/echoserver:1.8
    Image ID:       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 17 Apr 2024 18:04:34 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8wj92 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-8wj92:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  30s   default-scheduler  Successfully assigned default/hello-node-connect-57b4589c47-4gbv4 to functional-366561
  Normal  Pulling    30s   kubelet            Pulling image "registry.k8s.io/echoserver:1.8"
  Normal  Pulled     27s   kubelet            Successfully pulled image "registry.k8s.io/echoserver:1.8" in 2.939s (2.939s including waiting). Image size: 46237695 bytes.
  Normal  Created    27s   kubelet            Created container echoserver
  Normal  Started    27s   kubelet            Started container echoserver

functional_test.go:1604: (dbg) Run:  kubectl --context functional-366561 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-366561 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.164.202
IPs:                      10.100.164.202
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31904/TCP
Endpoints:                10.244.0.5:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
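Note the contradiction this post-mortem captures: the service has a populated Endpoints list (10.244.0.5:8080) and the pod reported Ready, yet every connection to NodePort 31904 was refused. That combination usually implicates node-level service routing (kube-proxy/iptables programming) rather than the echoserver workload itself. A quick hand-run probe, sketched below with the address taken from the log above, would distinguish a still-closed NodePort from a flaky application:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// NodePort address taken from the failing test run above.
		conn, err := net.DialTimeout("tcp", "192.168.39.13:31904", 3*time.Second)
		if err != nil {
			fmt.Println("NodePort unreachable:", err) // matches the test's "connection refused"
			return
		}
		defer conn.Close()
		fmt.Println("NodePort accepts TCP connections")
	}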
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-366561 -n functional-366561
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 logs -n 25: (2.063662913s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------|-------------------|---------|----------------|---------------------|---------------------|
	| Command |                                Args                                 |      Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------|-------------------|---------|----------------|---------------------|---------------------|
	| cp      | functional-366561 cp                                                | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | functional-366561:/home/docker/cp-test.txt                          |                   |         |                |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd1307465259/001/cp-test.txt          |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh -n                                            | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | functional-366561 sudo cat                                          |                   |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                            |                   |         |                |                     |                     |
	| cp      | functional-366561 cp                                                | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | testdata/cp-test.txt                                                |                   |         |                |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                     |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh -n                                            | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | functional-366561 sudo cat                                          |                   |         |                |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                     |                   |         |                |                     |                     |
	| addons  | functional-366561 addons list                                       | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	| addons  | functional-366561 addons list                                       | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | -o json                                                             |                   |         |                |                     |                     |
	| service | functional-366561 service                                           | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | hello-node-connect --url                                            |                   |         |                |                     |                     |
	| service | functional-366561 service list                                      | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	| service | functional-366561 service list                                      | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | -o json                                                             |                   |         |                |                     |                     |
	| service | functional-366561 service                                           | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | --namespace=default --https                                         |                   |         |                |                     |                     |
	|         | --url hello-node                                                    |                   |         |                |                     |                     |
	| service | functional-366561                                                   | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | service hello-node --url                                            |                   |         |                |                     |                     |
	|         | --format={{.IP}}                                                    |                   |         |                |                     |                     |
	| service | functional-366561 service                                           | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | hello-node --url                                                    |                   |         |                |                     |                     |
	| mount   | -p functional-366561                                                | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdany-port2068520610/001:/mount-9p |                   |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                              |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh findmnt                                       | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC |                     |
	|         | -T /mount-9p | grep 9p                                              |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh findmnt                                       | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | -T /mount-9p | grep 9p                                              |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh -- ls                                         | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | -la /mount-9p                                                       |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh cat                                           | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	|         | /mount-9p/test-1713377095812280804                                  |                   |         |                |                     |                     |
	| license |                                                                     | minikube          | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC | 17 Apr 24 18:04 UTC |
	| start   | -p functional-366561                                                | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:04 UTC |                     |
	|         | --dry-run --memory                                                  |                   |         |                |                     |                     |
	|         | 250MB --alsologtostderr                                             |                   |         |                |                     |                     |
	|         | --driver=kvm2                                                       |                   |         |                |                     |                     |
	|         | --container-runtime=containerd                                      |                   |         |                |                     |                     |
	| start   | -p functional-366561                                                | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC |                     |
	|         | --dry-run --alsologtostderr                                         |                   |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                                                  |                   |         |                |                     |                     |
	|         | --container-runtime=containerd                                      |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh sudo                                          | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC |                     |
	|         | systemctl is-active docker                                          |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh sudo                                          | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC |                     |
	|         | systemctl is-active crio                                            |                   |         |                |                     |                     |
	| start   | -p functional-366561                                                | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC |                     |
	|         | --dry-run --memory                                                  |                   |         |                |                     |                     |
	|         | 250MB --alsologtostderr                                             |                   |         |                |                     |                     |
	|         | --driver=kvm2                                                       |                   |         |                |                     |                     |
	|         | --container-runtime=containerd                                      |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh stat                                          | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC | 17 Apr 24 18:05 UTC |
	|         | /mount-9p/created-by-test                                           |                   |         |                |                     |                     |
	| ssh     | functional-366561 ssh stat                                          | functional-366561 | jenkins | v1.33.0-beta.0 | 17 Apr 24 18:05 UTC |                     |
	|         | /mount-9p/created-by-pod                                            |                   |         |                |                     |                     |
	|---------|---------------------------------------------------------------------|-------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 18:05:00
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 18:05:00.689214   89148 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:05:00.689343   89148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:05:00.689354   89148 out.go:304] Setting ErrFile to fd 2...
	I0417 18:05:00.689360   89148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:05:00.689665   89148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	I0417 18:05:00.690202   89148 out.go:298] Setting JSON to false
	I0417 18:05:00.691132   89148 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6451,"bootTime":1713370650,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:05:00.691200   89148 start.go:139] virtualization: kvm guest
	I0417 18:05:00.693382   89148 out.go:177] * [functional-366561] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 18:05:00.694880   89148 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:05:00.694907   89148 notify.go:220] Checking for updates...
	I0417 18:05:00.696326   89148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:05:00.697790   89148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	I0417 18:05:00.699251   89148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	I0417 18:05:00.700506   89148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:05:00.701793   89148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:05:00.703419   89148 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0417 18:05:00.703871   89148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:05:00.703930   89148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:05:00.718769   89148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I0417 18:05:00.719242   89148 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:05:00.719762   89148 main.go:141] libmachine: Using API Version  1
	I0417 18:05:00.719785   89148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:05:00.720162   89148 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:05:00.720372   89148 main.go:141] libmachine: (functional-366561) Calling .DriverName
	I0417 18:05:00.720634   89148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:05:00.721029   89148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:05:00.721072   89148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:05:00.735828   89148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0417 18:05:00.736205   89148 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:05:00.736707   89148 main.go:141] libmachine: Using API Version  1
	I0417 18:05:00.736726   89148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:05:00.737042   89148 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:05:00.737233   89148 main.go:141] libmachine: (functional-366561) Calling .DriverName
	I0417 18:05:00.768327   89148 out.go:177] * Using the kvm2 driver based on the existing profile
	I0417 18:05:00.769638   89148 start.go:297] selected driver: kvm2
	I0417 18:05:00.769653   89148 start.go:901] validating driver "kvm2" against &{Name:functional-366561 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:functional-366561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:05:00.769802   89148 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:05:00.772031   89148 out.go:177] 
	W0417 18:05:00.773262   89148 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0417 18:05:00.774528   89148 out.go:177] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cf80d93b99aa7       c613f16b66424       2 seconds ago        Running             myfrontend                0                   d6c2da8642e5e       sp-pod
	f6a079fd72a2a       56cc512116c8f       3 seconds ago        Exited              mount-munger              0                   f3e7fdb0dbcb6       busybox-mount
	c12debe9bcc1c       82e4c8a736a4f       17 seconds ago       Running             echoserver                0                   e89e47745f23f       hello-node-6d85cfcfd8-cwvgp
	a399c3912b4f8       5107333e08a87       17 seconds ago       Running             mysql                     0                   5f6602d579413       mysql-64454c8b5c-jfzlw
	7da89b68f9a4f       82e4c8a736a4f       27 seconds ago       Running             echoserver                0                   c45df9fa26793       hello-node-connect-57b4589c47-4gbv4
	27bc1ca9864b2       6e38f40d628db       43 seconds ago       Running             storage-provisioner       3                   c6d64a6988634       storage-provisioner
	ea3a7a71a7dc1       6e38f40d628db       59 seconds ago       Exited              storage-provisioner       2                   c6d64a6988634       storage-provisioner
	6859c5e8477da       65a750108e0b6       About a minute ago   Running             kube-apiserver            0                   d0ddb1fb67c87       kube-apiserver-functional-366561
	1bcd935868e0d       ae2ef7918948c       About a minute ago   Running             kube-controller-manager   3                   1b96e43363af4       kube-controller-manager-functional-366561
	256991d3743fa       461015b94df4b       About a minute ago   Running             kube-scheduler            2                   ec7b0ae5de905       kube-scheduler-functional-366561
	aa47091bafb13       ae2ef7918948c       About a minute ago   Exited              kube-controller-manager   2                   1b96e43363af4       kube-controller-manager-functional-366561
	0e641817cefff       461015b94df4b       About a minute ago   Exited              kube-scheduler            1                   ec7b0ae5de905       kube-scheduler-functional-366561
	8c58f866c71d9       cbb01a7bd410d       About a minute ago   Running             coredns                   1                   f683f86cf1596       coredns-7db6d8ff4d-g75wp
	d237d1a584d9c       35c7fe5cdbee5       About a minute ago   Running             kube-proxy                1                   c82db30a1c5cf       kube-proxy-m26jt
	10806836be08d       3861cfcd7c04c       About a minute ago   Running             etcd                      1                   e428dc86d96ad       etcd-functional-366561
	41492958d0bb2       cbb01a7bd410d       2 minutes ago        Running             coredns                   1                   806d43c9858ed       coredns-7db6d8ff4d-vzg4j
	cdd0c69e76004       cbb01a7bd410d       2 minutes ago        Exited              coredns                   0                   806d43c9858ed       coredns-7db6d8ff4d-vzg4j
	b48d805ab66a5       cbb01a7bd410d       2 minutes ago        Exited              coredns                   0                   f683f86cf1596       coredns-7db6d8ff4d-g75wp
	52a94f7581927       35c7fe5cdbee5       2 minutes ago        Exited              kube-proxy                0                   c82db30a1c5cf       kube-proxy-m26jt
	478ddb0e91382       3861cfcd7c04c       2 minutes ago        Exited              etcd                      0                   e428dc86d96ad       etcd-functional-366561
	
	
	==> containerd <==
	Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.291374904Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.293498095Z" level=info msg="CreateContainer within sandbox \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\" for container &ContainerMetadata{Name:mount-munger,Attempt:0,}"
	Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.297572019Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.334044750Z" level=info msg="CreateContainer within sandbox \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\" for &ContainerMetadata{Name:mount-munger,Attempt:0,} returns container id \"f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42\""
	Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.335904721Z" level=info msg="StartContainer for \"f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42\""
	Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.401329303Z" level=info msg="StartContainer for \"f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42\" returns successfully"
	Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.461696089Z" level=info msg="shim disconnected" id=f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42 namespace=k8s.io
	Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.462066511Z" level=warning msg="cleaning up after shim disconnected" id=f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42 namespace=k8s.io
	Apr 17 18:04:59 functional-366561 containerd[3601]: time="2024-04-17T18:04:59.462218589Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.263631778Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.291422499Z" level=info msg="ImageUpdate event name:\"docker.io/library/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.293449623Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=5407"
	Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.319754385Z" level=info msg="Pulled image \"docker.io/nginx:latest\" with image id \"sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b\", repo tag \"docker.io/library/nginx:latest\", repo digest \"docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1\", size \"70542235\" in 1.027813098s"
	Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.319987267Z" level=info msg="PullImage \"docker.io/nginx:latest\" returns image reference \"sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b\""
	Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.323015874Z" level=info msg="CreateContainer within sandbox \"d6c2da8642e5e2b1ef173cb1371b53e777d1afabe34130a5fa36a0bb76cff97b\" for container &ContainerMetadata{Name:myfrontend,Attempt:0,}"
	Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.351371680Z" level=info msg="CreateContainer within sandbox \"d6c2da8642e5e2b1ef173cb1371b53e777d1afabe34130a5fa36a0bb76cff97b\" for &ContainerMetadata{Name:myfrontend,Attempt:0,} returns container id \"cf80d93b99aa7de33d81db882857f594cc3355657250d49a1534b623e2a74cc1\""
	Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.353430544Z" level=info msg="StartContainer for \"cf80d93b99aa7de33d81db882857f594cc3355657250d49a1534b623e2a74cc1\""
	Apr 17 18:05:00 functional-366561 containerd[3601]: time="2024-04-17T18:05:00.439815819Z" level=info msg="StartContainer for \"cf80d93b99aa7de33d81db882857f594cc3355657250d49a1534b623e2a74cc1\" returns successfully"
	Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.104985839Z" level=info msg="StopPodSandbox for \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\""
	Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.105168651Z" level=info msg="Container to stop \"f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.173825088Z" level=info msg="shim disconnected" id=f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27 namespace=k8s.io
	Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.173974730Z" level=warning msg="cleaning up after shim disconnected" id=f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27 namespace=k8s.io
	Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.173987480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.273465758Z" level=info msg="TearDown network for sandbox \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\" successfully"
	Apr 17 18:05:01 functional-366561 containerd[3601]: time="2024-04-17T18:05:01.273729903Z" level=info msg="StopPodSandbox for \"f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27\" returns successfully"
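The recurring "failed to decode hosts.toml" errors in this section mean containerd found a registry hosts file it could not parse; the pulls around it still succeed (the nginx image is pulled moments later), so this is noise rather than the cause of the failure. For reference, a minimal well-formed /etc/containerd/certs.d/docker.io/hosts.toml looks like the sketch below; the actual layout on this CI host is not shown in the logs, so treat it as an illustrative assumption:

	# Default upstream registry to fall back to.
	server = "https://registry-1.docker.io"

	[host."https://registry-1.docker.io"]
	  capabilities = ["pull", "resolve"]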
	
	
	==> coredns [41492958d0bb25af9d3f69965a5868f447c8f1659ccc50f502acb303a3531d0f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54102 - 54224 "HINFO IN 4142035494611935772.3629735222420940855. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019738241s
	
	
	==> coredns [8c58f866c71d9c979b393430e87bccb0a4ecc1a2bea03541387ed7f87b3fe6ba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38187 - 24528 "HINFO IN 3452692630638164530.2872914596714558966. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020678086s
	
	
	==> coredns [b48d805ab66a54a98b61e4ba2e72441964ca0d6271db325d141e908a5124462c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35119 - 59750 "HINFO IN 1414768696273920048.3614774816264106578. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020575146s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1950557791]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 18:02:45.810) (total time: 11326ms):
	Trace[1950557791]: [11.326277794s] [11.326277794s] END
	[INFO] plugin/kubernetes: Trace[1143333661]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 18:02:45.810) (total time: 11326ms):
	Trace[1143333661]: [11.326569777s] [11.326569777s] END
	[INFO] plugin/kubernetes: Trace[1964501983]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (17-Apr-2024 18:02:45.807) (total time: 11328ms):
	Trace[1964501983]: [11.328788041s] [11.328788041s] END
	
	
	==> coredns [cdd0c69e760048777c6d66a275566376e7f9d5a41c70de7572976f351a2b409d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51527 - 65297 "HINFO IN 4082310492503323168.6429197130025581421. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019449222s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-366561
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-366561
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017d247dc519df02225d261a1d9173619e922e3
	                    minikube.k8s.io/name=functional-366561
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_17T18_02_31_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Apr 2024 18:02:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-366561
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Apr 2024 18:04:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Apr 2024 18:04:02 +0000   Wed, 17 Apr 2024 18:02:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Apr 2024 18:04:02 +0000   Wed, 17 Apr 2024 18:02:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Apr 2024 18:04:02 +0000   Wed, 17 Apr 2024 18:02:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Apr 2024 18:04:02 +0000   Wed, 17 Apr 2024 18:02:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    functional-366561
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 823279e9c69549e6bc8c5c18729a8d0c
	  System UUID:                823279e9-c695-49e6-bc8c-5c18729a8d0c
	  Boot ID:                    0b41b295-6710-4402-83cc-72001fb7e443
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.15
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-cwvgp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     hello-node-connect-57b4589c47-4gbv4          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     mysql-64454c8b5c-jfzlw                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    31s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 coredns-7db6d8ff4d-g75wp                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m18s
	  kube-system                 coredns-7db6d8ff4d-vzg4j                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m18s
	  kube-system                 etcd-functional-366561                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m32s
	  kube-system                 kube-apiserver-functional-366561             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-functional-366561    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-m26jt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-functional-366561             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (27%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  Starting                 2m17s                  kube-proxy       
	  Normal  Starting                 2m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node functional-366561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node functional-366561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s (x7 over 2m38s)  kubelet          Node functional-366561 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m31s                  kubelet          Node functional-366561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s                  kubelet          Node functional-366561 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m31s                  kubelet          Node functional-366561 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m31s                  kubelet          Node functional-366561 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m19s                  node-controller  Node functional-366561 event: Registered Node functional-366561 in Controller
	  Normal  Starting                 109s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)    kubelet          Node functional-366561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)    kubelet          Node functional-366561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)    kubelet          Node functional-366561 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           93s                    node-controller  Node functional-366561 event: Registered Node functional-366561 in Controller
	  Normal  Starting                 64s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  64s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  63s (x8 over 64s)      kubelet          Node functional-366561 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 64s)      kubelet          Node functional-366561 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x7 over 64s)      kubelet          Node functional-366561 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                    node-controller  Node functional-366561 event: Registered Node functional-366561 in Controller
	
	
	==> dmesg <==
	[  +0.212221] systemd-fstab-generator[2105]: Ignoring "noauto" option for root device
	[  +0.164837] systemd-fstab-generator[2117]: Ignoring "noauto" option for root device
	[  +0.374263] systemd-fstab-generator[2146]: Ignoring "noauto" option for root device
	[  +0.087498] kauditd_printk_skb: 188 callbacks suppressed
	[  +1.667032] systemd-fstab-generator[2324]: Ignoring "noauto" option for root device
	[  +6.015072] kauditd_printk_skb: 40 callbacks suppressed
	[Apr17 18:03] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.175121] systemd-fstab-generator[3045]: Ignoring "noauto" option for root device
	[ +10.994563] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.328915] systemd-fstab-generator[3223]: Ignoring "noauto" option for root device
	[ +13.663375] systemd-fstab-generator[3526]: Ignoring "noauto" option for root device
	[  +0.098141] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.073058] systemd-fstab-generator[3538]: Ignoring "noauto" option for root device
	[  +0.195498] systemd-fstab-generator[3552]: Ignoring "noauto" option for root device
	[  +0.169121] systemd-fstab-generator[3564]: Ignoring "noauto" option for root device
	[  +0.357903] systemd-fstab-generator[3593]: Ignoring "noauto" option for root device
	[  +1.164096] systemd-fstab-generator[3764]: Ignoring "noauto" option for root device
	[ +11.028153] kauditd_printk_skb: 125 callbacks suppressed
	[  +1.672117] systemd-fstab-generator[4027]: Ignoring "noauto" option for root device
	[Apr17 18:04] kauditd_printk_skb: 35 callbacks suppressed
	[ +16.529580] systemd-fstab-generator[4382]: Ignoring "noauto" option for root device
	[  +7.172942] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.026848] kauditd_printk_skb: 19 callbacks suppressed
	[ +10.510919] kauditd_printk_skb: 37 callbacks suppressed
	[  +8.661178] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [10806836be08d43ee757223b9498da7730088ad4d0be8de31221496d6b100de1] <==
	{"level":"info","ts":"2024-04-17T18:03:05.146035Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-17T18:03:05.147164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.13:2379"}
	{"level":"info","ts":"2024-04-17T18:04:41.64658Z","caller":"traceutil/trace.go:171","msg":"trace[1211228161] linearizableReadLoop","detail":"{readStateIndex:808; appliedIndex:807; }","duration":"107.339551ms","start":"2024-04-17T18:04:41.539206Z","end":"2024-04-17T18:04:41.646545Z","steps":["trace[1211228161] 'read index received'  (duration: 107.221906ms)","trace[1211228161] 'applied index is now lower than readState.Index'  (duration: 117.261µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-17T18:04:41.646958Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.611309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/\" range_end:\"/registry/services/specs/default0\" ","response":"range_response_count:4 size:2637"}
	{"level":"info","ts":"2024-04-17T18:04:41.647031Z","caller":"traceutil/trace.go:171","msg":"trace[1252540934] range","detail":"{range_begin:/registry/services/specs/default/; range_end:/registry/services/specs/default0; response_count:4; response_revision:743; }","duration":"107.840705ms","start":"2024-04-17T18:04:41.539182Z","end":"2024-04-17T18:04:41.647022Z","steps":["trace[1252540934] 'agreement among raft nodes before linearized reading'  (duration: 107.539447ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T18:04:41.64721Z","caller":"traceutil/trace.go:171","msg":"trace[1644345999] transaction","detail":"{read_only:false; response_revision:743; number_of_response:1; }","duration":"220.146131ms","start":"2024-04-17T18:04:41.427057Z","end":"2024-04-17T18:04:41.647203Z","steps":["trace[1644345999] 'process raft request'  (duration: 219.410114ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T18:04:43.435445Z","caller":"traceutil/trace.go:171","msg":"trace[630185727] transaction","detail":"{read_only:false; response_revision:749; number_of_response:1; }","duration":"474.473599ms","start":"2024-04-17T18:04:42.960955Z","end":"2024-04-17T18:04:43.435429Z","steps":["trace[630185727] 'process raft request'  (duration: 474.351733ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T18:04:43.436778Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:04:42.960937Z","time spent":"474.603673ms","remote":"127.0.0.1:47936","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-bx6bj5iks4bhnv22tlivd722f4\" mod_revision:701 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-bx6bj5iks4bhnv22tlivd722f4\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-bx6bj5iks4bhnv22tlivd722f4\" > >"}
	{"level":"info","ts":"2024-04-17T18:04:43.437734Z","caller":"traceutil/trace.go:171","msg":"trace[749517454] linearizableReadLoop","detail":"{readStateIndex:815; appliedIndex:814; }","duration":"144.611352ms","start":"2024-04-17T18:04:43.29311Z","end":"2024-04-17T18:04:43.437721Z","steps":["trace[749517454] 'read index received'  (duration: 144.487295ms)","trace[749517454] 'applied index is now lower than readState.Index'  (duration: 123.522µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T18:04:43.439574Z","caller":"traceutil/trace.go:171","msg":"trace[844471637] transaction","detail":"{read_only:false; response_revision:750; number_of_response:1; }","duration":"259.865378ms","start":"2024-04-17T18:04:43.179695Z","end":"2024-04-17T18:04:43.439561Z","steps":["trace[844471637] 'process raft request'  (duration: 257.963711ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T18:04:43.440501Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.376668ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11574"}
	{"level":"info","ts":"2024-04-17T18:04:43.440582Z","caller":"traceutil/trace.go:171","msg":"trace[944766252] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:750; }","duration":"147.48364ms","start":"2024-04-17T18:04:43.293087Z","end":"2024-04-17T18:04:43.440571Z","steps":["trace[944766252] 'agreement among raft nodes before linearized reading'  (duration: 147.30161ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T18:04:43.441022Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.70795ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-17T18:04:43.441093Z","caller":"traceutil/trace.go:171","msg":"trace[301249660] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:750; }","duration":"139.82738ms","start":"2024-04-17T18:04:43.301255Z","end":"2024-04-17T18:04:43.441083Z","steps":["trace[301249660] 'agreement among raft nodes before linearized reading'  (duration: 139.73386ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T18:04:43.441457Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.121984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.13\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-04-17T18:04:43.44153Z","caller":"traceutil/trace.go:171","msg":"trace[404705593] range","detail":"{range_begin:/registry/masterleases/192.168.39.13; range_end:; response_count:1; response_revision:750; }","duration":"102.224844ms","start":"2024-04-17T18:04:43.339294Z","end":"2024-04-17T18:04:43.441519Z","steps":["trace[404705593] 'agreement among raft nodes before linearized reading'  (duration: 102.09332ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-17T18:04:56.99051Z","caller":"traceutil/trace.go:171","msg":"trace[1739360161] linearizableReadLoop","detail":"{readStateIndex:849; appliedIndex:848; }","duration":"373.760741ms","start":"2024-04-17T18:04:56.616731Z","end":"2024-04-17T18:04:56.990492Z","steps":["trace[1739360161] 'read index received'  (duration: 373.545193ms)","trace[1739360161] 'applied index is now lower than readState.Index'  (duration: 214.695µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-17T18:04:56.990949Z","caller":"traceutil/trace.go:171","msg":"trace[1709261119] transaction","detail":"{read_only:false; response_revision:781; number_of_response:1; }","duration":"455.437141ms","start":"2024-04-17T18:04:56.535393Z","end":"2024-04-17T18:04:56.99083Z","steps":["trace[1709261119] 'process raft request'  (duration: 454.937013ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T18:04:56.991324Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:04:56.53538Z","time spent":"455.785924ms","remote":"127.0.0.1:47828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:780 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-17T18:04:56.992388Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"375.651535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/mysql-64454c8b5c-jfzlw\" ","response":"range_response_count:1 size:3147"}
	{"level":"info","ts":"2024-04-17T18:04:56.992775Z","caller":"traceutil/trace.go:171","msg":"trace[640641266] range","detail":"{range_begin:/registry/pods/default/mysql-64454c8b5c-jfzlw; range_end:; response_count:1; response_revision:781; }","duration":"376.049769ms","start":"2024-04-17T18:04:56.616706Z","end":"2024-04-17T18:04:56.992756Z","steps":["trace[640641266] 'agreement among raft nodes before linearized reading'  (duration: 375.61026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T18:04:56.993022Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:04:56.616692Z","time spent":"376.313705ms","remote":"127.0.0.1:47848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":3169,"request content":"key:\"/registry/pods/default/mysql-64454c8b5c-jfzlw\" "}
	{"level":"warn","ts":"2024-04-17T18:04:56.993029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"337.4053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11892"}
	{"level":"info","ts":"2024-04-17T18:04:56.993371Z","caller":"traceutil/trace.go:171","msg":"trace[1581270615] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:781; }","duration":"338.214611ms","start":"2024-04-17T18:04:56.655145Z","end":"2024-04-17T18:04:56.993359Z","steps":["trace[1581270615] 'agreement among raft nodes before linearized reading'  (duration: 335.789156ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T18:04:56.994007Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:04:56.655132Z","time spent":"338.863067ms","remote":"127.0.0.1:47848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":11914,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	
	
	==> etcd [478ddb0e91382700b25ca95dbf3feb9d672ce1347b5c42af01a52430e6a5c6ab] <==
	{"level":"info","ts":"2024-04-17T18:02:26.301038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3fba3e6c6ecbcd became leader at term 2"}
	{"level":"info","ts":"2024-04-17T18:02:26.301045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1d3fba3e6c6ecbcd elected leader 1d3fba3e6c6ecbcd at term 2"}
	{"level":"info","ts":"2024-04-17T18:02:26.303917Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"1d3fba3e6c6ecbcd","local-member-attributes":"{Name:functional-366561 ClientURLs:[https://192.168.39.13:2379]}","request-path":"/0/members/1d3fba3e6c6ecbcd/attributes","cluster-id":"1e01947a35a5ac2c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-17T18:02:26.30397Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T18:02:26.304227Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T18:02:26.304447Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-17T18:02:26.307951Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-17T18:02:26.307996Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-17T18:02:26.313512Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-17T18:02:26.346246Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.13:2379"}
	{"level":"info","ts":"2024-04-17T18:02:26.346584Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1e01947a35a5ac2c","local-member-id":"1d3fba3e6c6ecbcd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T18:02:26.346707Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T18:02:26.346757Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-17T18:02:39.832929Z","caller":"traceutil/trace.go:171","msg":"trace[749136949] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"396.662962ms","start":"2024-04-17T18:02:39.436252Z","end":"2024-04-17T18:02:39.832915Z","steps":["trace[749136949] 'process raft request'  (duration: 396.520236ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-17T18:02:39.833363Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-17T18:02:39.436237Z","time spent":"396.76236ms","remote":"127.0.0.1:54928","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4509,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-functional-366561\" mod_revision:304 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-functional-366561\" value_size:4442 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-functional-366561\" > >"}
	{"level":"info","ts":"2024-04-17T18:03:02.443477Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-17T18:03:02.443571Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-366561","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
	{"level":"warn","ts":"2024-04-17T18:03:02.443748Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-17T18:03:02.443902Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-17T18:03:02.457911Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-17T18:03:02.458041Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.13:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-17T18:03:02.458105Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1d3fba3e6c6ecbcd","current-leader-member-id":"1d3fba3e6c6ecbcd"}
	{"level":"info","ts":"2024-04-17T18:03:02.46112Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-04-17T18:03:02.461393Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.13:2380"}
	{"level":"info","ts":"2024-04-17T18:03:02.461513Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-366561","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.13:2380"],"advertise-client-urls":["https://192.168.39.13:2379"]}
	
	
	==> kernel <==
	 18:05:03 up 3 min,  0 users,  load average: 1.88, 0.79, 0.30
	Linux functional-366561 5.10.207 #1 SMP Tue Apr 16 07:56:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6859c5e8477dab1c555447ed617132d4ba504a478163f5ce75e0491d8a0eeb3f] <==
	I0417 18:04:02.020357       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0417 18:04:02.020940       1 shared_informer.go:320] Caches are synced for configmaps
	I0417 18:04:02.021926       1 aggregator.go:165] initial CRD sync complete...
	I0417 18:04:02.022130       1 autoregister_controller.go:141] Starting autoregister controller
	I0417 18:04:02.022240       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0417 18:04:02.022440       1 cache.go:39] Caches are synced for autoregister controller
	I0417 18:04:02.084649       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0417 18:04:02.096223       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0417 18:04:02.096359       1 policy_source.go:224] refreshing policies
	I0417 18:04:02.109995       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0417 18:04:02.929098       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0417 18:04:03.347918       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.13]
	I0417 18:04:03.349828       1 controller.go:615] quota admission added evaluator for: endpoints
	I0417 18:04:03.355734       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0417 18:04:03.514107       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0417 18:04:03.526409       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0417 18:04:03.561079       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0417 18:04:03.592615       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0417 18:04:03.599123       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0417 18:04:26.470107       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.202.175"}
	I0417 18:04:31.107558       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0417 18:04:31.222486       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.235.46"}
	I0417 18:04:31.245287       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.164.202"}
	I0417 18:04:33.934298       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.147.82"}
	E0417 18:04:57.128105       1 upgradeaware.go:427] Error proxying data from client to backend: write tcp 192.168.39.13:33596->192.168.39.13:10250: write: broken pipe
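
	Note: the log above confirms the hello-node-connect Service was allocated a clusterIP at 18:04:31; when its NodePort later refuses connections, a first check is whether the Service ever had ready endpoints, e.g. (a sketch):

	  kubectl --context functional-366561 get svc,endpoints hello-node-connect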
	
	
	==> kube-controller-manager [1bcd935868e0deede44c4cf609102a4b8e23c92c57550df7affaa8def8dde735] <==
	I0417 18:04:15.296045       1 shared_informer.go:320] Caches are synced for expand
	I0417 18:04:15.301165       1 shared_informer.go:320] Caches are synced for resource quota
	I0417 18:04:15.337154       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0417 18:04:15.345465       1 shared_informer.go:320] Caches are synced for resource quota
	I0417 18:04:15.726110       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 18:04:15.761758       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 18:04:15.761802       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0417 18:04:31.174519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="61.150226ms"
	I0417 18:04:31.198786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="24.208412ms"
	I0417 18:04:31.200127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="87.312µs"
	I0417 18:04:31.206734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="26.924µs"
	I0417 18:04:31.341093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="39.762624ms"
	I0417 18:04:31.366673       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="25.054838ms"
	I0417 18:04:31.368566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="54.754µs"
	I0417 18:04:31.375594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="41.422µs"
	I0417 18:04:33.862951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="56.884002ms"
	I0417 18:04:33.894482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="31.4716ms"
	I0417 18:04:33.959348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="64.803922ms"
	I0417 18:04:33.959463       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="78.319µs"
	I0417 18:04:35.020977       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="10.36698ms"
	I0417 18:04:35.021553       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="24.054µs"
	I0417 18:04:46.090788       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="21.274133ms"
	I0417 18:04:46.093400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="73.858µs"
	I0417 18:04:46.129322       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="39.023212ms"
	I0417 18:04:46.134011       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="44.64µs"
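
	Note: the "Finished syncing" entries above show the ReplicaSet controller reconciling hello-node-connect, mysql, and hello-node; whether the deployments actually converged can be confirmed with, for example (a sketch):

	  kubectl --context functional-366561 rollout status deployment/hello-node-connect --timeout=60s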
	
	
	==> kube-controller-manager [aa47091bafb13d5035bb195f6ee2e5886c67ead7fdf2fc40f9136a8fa983bea6] <==
	I0417 18:03:29.519509       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0417 18:03:29.531022       1 shared_informer.go:320] Caches are synced for HPA
	I0417 18:03:29.532246       1 shared_informer.go:320] Caches are synced for resource quota
	I0417 18:03:29.536005       1 shared_informer.go:320] Caches are synced for disruption
	I0417 18:03:29.536920       1 shared_informer.go:320] Caches are synced for stateful set
	I0417 18:03:29.542688       1 shared_informer.go:320] Caches are synced for attach detach
	I0417 18:03:29.549714       1 shared_informer.go:320] Caches are synced for job
	I0417 18:03:29.550625       1 shared_informer.go:320] Caches are synced for GC
	I0417 18:03:29.552963       1 shared_informer.go:320] Caches are synced for deployment
	I0417 18:03:29.551981       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0417 18:03:29.555187       1 shared_informer.go:320] Caches are synced for PV protection
	I0417 18:03:29.555235       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0417 18:03:29.557751       1 shared_informer.go:320] Caches are synced for daemon sets
	I0417 18:03:29.565161       1 shared_informer.go:320] Caches are synced for taint
	I0417 18:03:29.565589       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0417 18:03:29.566266       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-366561"
	I0417 18:03:29.566556       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0417 18:03:29.572011       1 shared_informer.go:320] Caches are synced for persistent volume
	I0417 18:03:29.572455       1 shared_informer.go:320] Caches are synced for ephemeral
	I0417 18:03:29.574014       1 shared_informer.go:320] Caches are synced for endpoint
	I0417 18:03:29.579182       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.479715ms"
	I0417 18:03:29.580549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="44.579µs"
	I0417 18:03:30.021392       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 18:03:30.063236       1 shared_informer.go:320] Caches are synced for garbage collector
	I0417 18:03:30.063264       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [52a94f758192796b9d68414d2798e58b00d66ab1635824b8f2e7f35b3296dd52] <==
	I0417 18:02:45.817965       1 server_linux.go:69] "Using iptables proxy"
	I0417 18:02:45.831296       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.13"]
	I0417 18:02:45.917663       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0417 18:02:45.917724       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0417 18:02:45.917740       1 server_linux.go:165] "Using iptables Proxier"
	I0417 18:02:45.921246       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0417 18:02:45.921624       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0417 18:02:45.921662       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 18:02:45.922822       1 config.go:192] "Starting service config controller"
	I0417 18:02:45.923162       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0417 18:02:45.923560       1 config.go:101] "Starting endpoint slice config controller"
	I0417 18:02:45.923704       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0417 18:02:45.924566       1 config.go:319] "Starting node config controller"
	I0417 18:02:45.924602       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0417 18:02:46.024465       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0417 18:02:46.024544       1 shared_informer.go:320] Caches are synced for service config
	I0417 18:02:46.024811       1 shared_informer.go:320] Caches are synced for node config
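
	Note: since this kube-proxy instance runs the iptables proxier (see "Using iptables Proxier" above), NodePort traffic depends on the KUBE-NODEPORTS chain it programs in the nat table; that chain can be inspected from inside the VM, e.g. (a sketch):

	  minikube ssh -p functional-366561 -- sudo iptables -t nat -L KUBE-NODEPORTS -n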
	
	
	==> kube-proxy [d237d1a584d9cad2024a5c96eb3893a636786d7554ded6fe60cf4936dc76bd0e] <==
	W0417 18:03:05.596637       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:05.596666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:05.596710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:05.596765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:06.936124       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:06.936173       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:07.007206       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:07.007553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:07.019700       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:07.019765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:09.553227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:09.553363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:09.625657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:09.626178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:10.184540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:10.184715       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:13.077401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:13.077537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-366561&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:14.699774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:14.699906       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	W0417 18:03:14.853353       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	E0417 18:03:14.853468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.13:8441: connect: connection refused
	I0417 18:03:22.696327       1 shared_informer.go:320] Caches are synced for node config
	I0417 18:03:23.095363       1 shared_informer.go:320] Caches are synced for service config
	I0417 18:03:24.696059       1 shared_informer.go:320] Caches are synced for endpoint slice config
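
	Note: the connection-refused bursts against control-plane.minikube.internal:8441 above span the apiserver restart window and stop once the informer caches resync at 18:03:22-18:03:24. Apiserver readiness can be probed directly, e.g. (a sketch):

	  kubectl --context functional-366561 get --raw '/readyz?verbose'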
	
	
	==> kube-scheduler [0e641817cefff59052b2dba024282cec36f455706fae6bec5b64e25d12999be1] <==
	I0417 18:03:05.456175       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0417 18:03:05.461797       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0417 18:03:05.462830       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0417 18:03:16.928638       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0417 18:03:16.929123       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0417 18:03:16.929423       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0417 18:03:16.989787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0417 18:03:16.994820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0417 18:03:16.996498       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)
	E0417 18:03:16.996973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0417 18:03:16.998247       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)
	E0417 18:03:16.998555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0417 18:03:16.998703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0417 18:03:17.000966       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0417 18:03:17.001248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0417 18:03:17.001493       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0417 18:03:17.001708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0417 18:03:17.001823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0417 18:03:17.001987       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)
	I0417 18:03:56.872218       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0417 18:03:56.872393       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0417 18:03:56.872427       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0417 18:03:56.872517       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0417 18:03:56.872588       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
	E0417 18:03:56.872199       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [256991d3743fae01871293d72fd9dc862d28ab3964f577ba5df2a4afe4e0dd43] <==
	I0417 18:04:00.964297       1 serving.go:380] Generated self-signed cert in-memory
	W0417 18:04:01.993754       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0417 18:04:01.993942       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0417 18:04:01.994403       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0417 18:04:01.994590       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0417 18:04:02.032352       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.2"
	I0417 18:04:02.032400       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0417 18:04:02.036066       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0417 18:04:02.036353       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0417 18:04:02.036528       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0417 18:04:02.036715       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0417 18:04:02.137602       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 17 18:04:57 functional-366561 kubelet[4034]: I0417 18:04:57.614454    4034 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cxs66\" (UniqueName: \"kubernetes.io/projected/af3e3341-24a0-4157-bc43-d12c80894555-kube-api-access-cxs66\") on node \"functional-366561\" DevicePath \"\""
	Apr 17 18:04:57 functional-366561 kubelet[4034]: I0417 18:04:57.614488    4034 reconciler_common.go:289] "Volume detached for volume \"pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d\" (UniqueName: \"kubernetes.io/host-path/af3e3341-24a0-4157-bc43-d12c80894555-pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d\") on node \"functional-366561\" DevicePath \"\""
	Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.089318    4034 scope.go:117] "RemoveContainer" containerID="34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.106910    4034 scope.go:117] "RemoveContainer" containerID="34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: E0417 18:04:58.109134    4034 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587\": not found" containerID="34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.109224    4034 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587"} err="failed to get container status \"34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587\": rpc error: code = NotFound desc = an error occurred when try to find container \"34598ebd5f47e710bed993410a252d27f67f10fb276ba566909821a7689d5587\": not found"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.239493    4034 topology_manager.go:215] "Topology Admit Handler" podUID="cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c" podNamespace="default" podName="sp-pod"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: E0417 18:04:58.239636    4034 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="af3e3341-24a0-4157-bc43-d12c80894555" containerName="myfrontend"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.239667    4034 memory_manager.go:354] "RemoveStaleState removing state" podUID="af3e3341-24a0-4157-bc43-d12c80894555" containerName="myfrontend"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.321404    4034 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d\" (UniqueName: \"kubernetes.io/host-path/cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c-pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d\") pod \"sp-pod\" (UID: \"cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c\") " pod="default/sp-pod"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.321446    4034 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cwsz\" (UniqueName: \"kubernetes.io/projected/cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c-kube-api-access-5cwsz\") pod \"sp-pod\" (UID: \"cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c\") " pod="default/sp-pod"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: I0417 18:04:58.771604    4034 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af3e3341-24a0-4157-bc43-d12c80894555" path="/var/lib/kubelet/pods/af3e3341-24a0-4157-bc43-d12c80894555/volumes"
	Apr 17 18:04:58 functional-366561 kubelet[4034]: E0417 18:04:58.785018    4034 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 17 18:04:58 functional-366561 kubelet[4034]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 17 18:04:58 functional-366561 kubelet[4034]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 17 18:04:58 functional-366561 kubelet[4034]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 17 18:04:58 functional-366561 kubelet[4034]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.300530    4034 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.76901149 podStartE2EDuration="3.300511764s" podCreationTimestamp="2024-04-17 18:04:58 +0000 UTC" firstStartedPulling="2024-04-17 18:04:58.789693572 +0000 UTC m=+60.177560344" lastFinishedPulling="2024-04-17 18:05:00.321193843 +0000 UTC m=+61.709060618" observedRunningTime="2024-04-17 18:05:01.142611071 +0000 UTC m=+62.530477861" watchObservedRunningTime="2024-04-17 18:05:01.300511764 +0000 UTC m=+62.688378555"
	Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.344652    4034 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ba240c32-9c8b-4857-b346-62c7edb3d934-test-volume\") pod \"ba240c32-9c8b-4857-b346-62c7edb3d934\" (UID: \"ba240c32-9c8b-4857-b346-62c7edb3d934\") "
	Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.345101    4034 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmm8b\" (UniqueName: \"kubernetes.io/projected/ba240c32-9c8b-4857-b346-62c7edb3d934-kube-api-access-zmm8b\") pod \"ba240c32-9c8b-4857-b346-62c7edb3d934\" (UID: \"ba240c32-9c8b-4857-b346-62c7edb3d934\") "
	Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.345040    4034 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba240c32-9c8b-4857-b346-62c7edb3d934-test-volume" (OuterVolumeSpecName: "test-volume") pod "ba240c32-9c8b-4857-b346-62c7edb3d934" (UID: "ba240c32-9c8b-4857-b346-62c7edb3d934"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.350488    4034 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba240c32-9c8b-4857-b346-62c7edb3d934-kube-api-access-zmm8b" (OuterVolumeSpecName: "kube-api-access-zmm8b") pod "ba240c32-9c8b-4857-b346-62c7edb3d934" (UID: "ba240c32-9c8b-4857-b346-62c7edb3d934"). InnerVolumeSpecName "kube-api-access-zmm8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.446514    4034 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zmm8b\" (UniqueName: \"kubernetes.io/projected/ba240c32-9c8b-4857-b346-62c7edb3d934-kube-api-access-zmm8b\") on node \"functional-366561\" DevicePath \"\""
	Apr 17 18:05:01 functional-366561 kubelet[4034]: I0417 18:05:01.446591    4034 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/ba240c32-9c8b-4857-b346-62c7edb3d934-test-volume\") on node \"functional-366561\" DevicePath \"\""
	Apr 17 18:05:02 functional-366561 kubelet[4034]: I0417 18:05:02.112069    4034 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3e7fdb0dbcb6297d728449e3c43a6e474d573e183bce1f877adc31c533c4c27"
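
	Note: the "Could not set up iptables canary" error above comes from the kubelet's IPv6 canary: the guest kernel has no ip6table_nat module loaded, which is harmless for the IPv4 NodePorts this test exercises. Loading the module would silence it (a sketch):

	  minikube ssh -p functional-366561 -- sudo modprobe ip6table_nat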
	
	
	==> storage-provisioner [27bc1ca9864b2ef502be0415eb0d969895ef1350c9eb88e73527cdb29a9f1c8b] <==
	I0417 18:04:18.952653       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0417 18:04:18.961274       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0417 18:04:18.961497       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0417 18:04:36.374388       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0417 18:04:36.374757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-366561_9cec50bb-e786-46d5-b791-866f00d0ed89!
	I0417 18:04:36.376119       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a4a7f19e-ae80-4dc9-aee1-c37564a8ef17", APIVersion:"v1", ResourceVersion:"730", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-366561_9cec50bb-e786-46d5-b791-866f00d0ed89 became leader
	I0417 18:04:36.475218       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-366561_9cec50bb-e786-46d5-b791-866f00d0ed89!
	I0417 18:04:38.240070       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0417 18:04:38.242788       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fd397dbb-c491-42d5-a6ab-c49321dbb37d", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0417 18:04:38.240700       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    19763129-b30d-4c94-821b-abbe36aa5948 382 0 2024-04-17 18:02:44 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-04-17 18:02:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d &PersistentVolumeClaim{ObjectMeta:{myclaim  default  fd397dbb-c491-42d5-a6ab-c49321dbb37d 733 0 2024-04-17 18:04:38 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-04-17 18:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-04-17 18:04:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0417 18:04:38.247761       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d" provisioned
	I0417 18:04:38.248087       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0417 18:04:38.248365       1 volume_store.go:212] Trying to save persistentvolume "pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d"
	I0417 18:04:38.281792       1 volume_store.go:219] persistentvolume "pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d" saved
	I0417 18:04:38.339435       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fd397dbb-c491-42d5-a6ab-c49321dbb37d", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d
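
	Note: the provisioner log above shows pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d being created for default/myclaim under /tmp/hostpath-provisioner; the binding can be verified with (a sketch):

	  kubectl --context functional-366561 get pvc myclaim
	  kubectl --context functional-366561 get pv pvc-fd397dbb-c491-42d5-a6ab-c49321dbb37d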
	
	
	==> storage-provisioner [ea3a7a71a7dc10c70e0062c24936d2b324cf229928f68dded4693a554ad91238] <==
	I0417 18:04:03.178798       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0417 18:04:03.181648       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-366561 -n functional-366561
helpers_test.go:261: (dbg) Run:  kubectl --context functional-366561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-366561 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-366561 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-366561/192.168.39.13
	Start Time:       Wed, 17 Apr 2024 18:04:57 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://f6a079fd72a2a5d710a189fa7ec9d3ed6d6d5025f686c32e695f14a11e5fbb42
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 17 Apr 2024 18:04:59 +0000
	      Finished:     Wed, 17 Apr 2024 18:04:59 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zmm8b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zmm8b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6s    default-scheduler  Successfully assigned default/busybox-mount to functional-366561
	  Normal  Pulling    7s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.477s (1.478s including waiting). Image size: 2395207 bytes.
	  Normal  Created    5s    kubelet            Created container mount-munger
	  Normal  Started    5s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (33.14s)
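
Note: for local debugging, the failing probe can be reproduced by hand; as a sketch (using the standard "minikube service --url" NodePort discovery and curl's connection-refused retry, available in curl 7.52+):

  URL=$(minikube -p functional-366561 service hello-node-connect --url)
  curl -sv --retry 5 --retry-connrefused "$URL"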


Test pass (288/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.04
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0-rc.2/json-events 9.14
13 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
17 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
22 TestOffline 103.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 144.34
29 TestAddons/parallel/Registry 16.76
30 TestAddons/parallel/Ingress 23.82
31 TestAddons/parallel/InspektorGadget 11.14
32 TestAddons/parallel/MetricsServer 5.85
33 TestAddons/parallel/HelmTiller 13.93
35 TestAddons/parallel/CSI 65.08
36 TestAddons/parallel/Headlamp 13.89
37 TestAddons/parallel/CloudSpanner 5.82
38 TestAddons/parallel/LocalPath 12.31
39 TestAddons/parallel/NvidiaDevicePlugin 6.7
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.12
44 TestAddons/StoppedEnableDisable 92.75
45 TestCertOptions 94.57
46 TestCertExpiration 295.32
48 TestForceSystemdFlag 50.53
49 TestForceSystemdEnv 50.68
51 TestKVMDriverInstallOrUpdate 3.61
55 TestErrorSpam/setup 45.64
56 TestErrorSpam/start 0.38
57 TestErrorSpam/status 0.77
58 TestErrorSpam/pause 1.63
59 TestErrorSpam/unpause 1.7
60 TestErrorSpam/stop 5.02
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 59.22
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 46.59
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.83
72 TestFunctional/serial/CacheCmd/cache/add_local 2.13
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.93
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.12
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 40.54
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 1.66
83 TestFunctional/serial/LogsFileCmd 1.69
84 TestFunctional/serial/InvalidService 4.75
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 14.48
88 TestFunctional/parallel/DryRun 0.29
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 0.85
95 TestFunctional/parallel/AddonsCmd 0.15
96 TestFunctional/parallel/PersistentVolumeClaim 34.76
98 TestFunctional/parallel/SSHCmd 0.45
99 TestFunctional/parallel/CpCmd 1.45
100 TestFunctional/parallel/MySQL 28.67
101 TestFunctional/parallel/FileSync 0.26
102 TestFunctional/parallel/CertSync 1.56
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
110 TestFunctional/parallel/License 0.19
120 TestFunctional/parallel/ServiceCmd/DeployApp 19.22
121 TestFunctional/parallel/ServiceCmd/List 0.45
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
123 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
124 TestFunctional/parallel/ServiceCmd/Format 0.32
125 TestFunctional/parallel/ServiceCmd/URL 0.31
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
127 TestFunctional/parallel/ProfileCmd/profile_list 0.31
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
129 TestFunctional/parallel/MountCmd/any-port 6.6
130 TestFunctional/parallel/Version/short 0.07
131 TestFunctional/parallel/Version/components 0.84
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
136 TestFunctional/parallel/ImageCommands/ImageBuild 3.77
137 TestFunctional/parallel/ImageCommands/Setup 1.39
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.93
139 TestFunctional/parallel/MountCmd/specific-port 1.97
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.04
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.74
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.19
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.55
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.22
150 TestFunctional/delete_addon-resizer_images 0.06
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 199.04
157 TestMultiControlPlane/serial/DeployApp 7.42
158 TestMultiControlPlane/serial/PingHostFromPods 1.32
159 TestMultiControlPlane/serial/AddWorkerNode 45.87
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.58
162 TestMultiControlPlane/serial/CopyFile 13.86
163 TestMultiControlPlane/serial/StopSecondaryNode 92.44
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.42
165 TestMultiControlPlane/serial/RestartSecondaryNode 44.44
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.57
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 439.93
168 TestMultiControlPlane/serial/DeleteSecondaryNode 8.15
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
170 TestMultiControlPlane/serial/StopCluster 276.54
171 TestMultiControlPlane/serial/RestartCluster 155.43
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
173 TestMultiControlPlane/serial/AddSecondaryNode 75.13
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
178 TestJSONOutput/start/Command 61.84
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.75
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.66
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.35
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.06
207 TestMinikubeProfile 93.86
210 TestMountStart/serial/StartWithMountFirst 28.45
211 TestMountStart/serial/VerifyMountFirst 0.38
212 TestMountStart/serial/StartWithMountSecond 30.13
213 TestMountStart/serial/VerifyMountSecond 0.39
214 TestMountStart/serial/DeleteFirst 0.66
215 TestMountStart/serial/VerifyMountPostDelete 0.4
216 TestMountStart/serial/Stop 1.59
217 TestMountStart/serial/RestartStopped 23.14
218 TestMountStart/serial/VerifyMountPostStop 0.41
221 TestMultiNode/serial/FreshStart2Nodes 104.21
222 TestMultiNode/serial/DeployApp2Nodes 4.28
223 TestMultiNode/serial/PingHostFrom2Pods 0.88
224 TestMultiNode/serial/AddNode 42.53
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.24
227 TestMultiNode/serial/CopyFile 7.69
228 TestMultiNode/serial/StopNode 2.44
229 TestMultiNode/serial/StartAfterStop 26.61
230 TestMultiNode/serial/RestartKeepsNodes 300.31
231 TestMultiNode/serial/DeleteNode 2.37
232 TestMultiNode/serial/StopMultiNode 184.13
233 TestMultiNode/serial/RestartMultiNode 86.05
234 TestMultiNode/serial/ValidateNameConflict 50.59
239 TestPreload 270.45
241 TestScheduledStopUnix 120.6
245 TestRunningBinaryUpgrade 211.45
247 TestKubernetesUpgrade 179.42
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
251 TestNoKubernetes/serial/StartWithK8s 100.93
252 TestNoKubernetes/serial/StartWithStopK8s 46.31
261 TestPause/serial/Start 89.65
262 TestNoKubernetes/serial/Start 35.85
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
264 TestNoKubernetes/serial/ProfileList 16.01
265 TestPause/serial/SecondStartNoReconfiguration 54.15
266 TestNoKubernetes/serial/Stop 1.5
267 TestNoKubernetes/serial/StartNoArgs 30.76
275 TestNetworkPlugins/group/false 4.29
279 TestStoppedBinaryUpgrade/Setup 0.67
280 TestStoppedBinaryUpgrade/Upgrade 192.94
281 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
282 TestPause/serial/Pause 0.74
283 TestPause/serial/VerifyStatus 0.25
284 TestPause/serial/Unpause 0.65
285 TestPause/serial/PauseAgain 0.8
286 TestPause/serial/DeletePaused 0.98
287 TestPause/serial/VerifyDeletedResources 0.11
289 TestStartStop/group/old-k8s-version/serial/FirstStart 196.06
291 TestStartStop/group/embed-certs/serial/FirstStart 151.7
292 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
294 TestStartStop/group/no-preload/serial/FirstStart 114.04
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.46
296 TestStartStop/group/embed-certs/serial/DeployApp 10.32
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
298 TestStartStop/group/old-k8s-version/serial/Stop 93.5
299 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
300 TestStartStop/group/embed-certs/serial/Stop 92.47
301 TestStartStop/group/no-preload/serial/DeployApp 9.32
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.28
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
305 TestStartStop/group/no-preload/serial/Stop 92.48
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
307 TestStartStop/group/old-k8s-version/serial/SecondStart 485.91
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
309 TestStartStop/group/embed-certs/serial/SecondStart 336.66
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.33
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.48
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
314 TestStartStop/group/no-preload/serial/SecondStart 316.71
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 319.56
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.01
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
320 TestStartStop/group/embed-certs/serial/Pause 3.02
322 TestStartStop/group/newest-cni/serial/FirstStart 65.7
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
326 TestStartStop/group/no-preload/serial/Pause 3.79
327 TestNetworkPlugins/group/auto/Start 64.93
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
330 TestStartStop/group/newest-cni/serial/Stop 2.37
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
332 TestStartStop/group/newest-cni/serial/SecondStart 40.75
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.38
337 TestNetworkPlugins/group/auto/KubeletFlags 0.26
338 TestNetworkPlugins/group/auto/NetCatPod 11.28
339 TestNetworkPlugins/group/kindnet/Start 71.31
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
343 TestStartStop/group/newest-cni/serial/Pause 2.98
344 TestNetworkPlugins/group/calico/Start 117.38
345 TestNetworkPlugins/group/auto/DNS 26.57
346 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
347 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
348 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
349 TestStartStop/group/old-k8s-version/serial/Pause 2.62
350 TestNetworkPlugins/group/custom-flannel/Start 117.21
351 TestNetworkPlugins/group/auto/Localhost 0.14
352 TestNetworkPlugins/group/auto/HairPin 0.16
353 TestNetworkPlugins/group/enable-default-cni/Start 134.82
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
356 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
357 TestNetworkPlugins/group/kindnet/DNS 0.19
358 TestNetworkPlugins/group/kindnet/Localhost 0.15
359 TestNetworkPlugins/group/kindnet/HairPin 0.16
360 TestNetworkPlugins/group/flannel/Start 90.11
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/KubeletFlags 0.24
363 TestNetworkPlugins/group/calico/NetCatPod 10.28
364 TestNetworkPlugins/group/calico/DNS 0.2
365 TestNetworkPlugins/group/calico/Localhost 0.18
366 TestNetworkPlugins/group/calico/HairPin 0.19
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.17
369 TestNetworkPlugins/group/custom-flannel/DNS 0.19
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
372 TestNetworkPlugins/group/bridge/Start 102.84
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.25
375 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
380 TestNetworkPlugins/group/flannel/NetCatPod 10.32
381 TestNetworkPlugins/group/flannel/DNS 0.17
382 TestNetworkPlugins/group/flannel/Localhost 0.13
383 TestNetworkPlugins/group/flannel/HairPin 0.13
384 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
385 TestNetworkPlugins/group/bridge/NetCatPod 11.24
386 TestNetworkPlugins/group/bridge/DNS 0.15
387 TestNetworkPlugins/group/bridge/Localhost 0.13
388 TestNetworkPlugins/group/bridge/HairPin 0.12

TestDownloadOnly/v1.20.0/json-events (9.04s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-629558 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-629558 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (9.03825408s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.04s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-629558
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-629558: exit status 85 (71.666477ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-629558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:55 UTC |          |
	|         | -p download-only-629558        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=containerd |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 17:55:11
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 17:55:11.563132   82536 out.go:291] Setting OutFile to fd 1 ...
	I0417 17:55:11.563283   82536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:55:11.563296   82536 out.go:304] Setting ErrFile to fd 2...
	I0417 17:55:11.563311   82536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:55:11.563899   82536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	W0417 17:55:11.564438   82536 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18665-75265/.minikube/config/config.json: open /home/jenkins/minikube-integration/18665-75265/.minikube/config/config.json: no such file or directory
	I0417 17:55:11.565226   82536 out.go:298] Setting JSON to true
	I0417 17:55:11.566205   82536 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5862,"bootTime":1713370650,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 17:55:11.566272   82536 start.go:139] virtualization: kvm guest
	I0417 17:55:11.568956   82536 out.go:97] [download-only-629558] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 17:55:11.570434   82536 out.go:169] MINIKUBE_LOCATION=18665
	W0417 17:55:11.569085   82536 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18665-75265/.minikube/cache/preloaded-tarball: no such file or directory
	I0417 17:55:11.569137   82536 notify.go:220] Checking for updates...
	I0417 17:55:11.573221   82536 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 17:55:11.574752   82536 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	I0417 17:55:11.576222   82536 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	I0417 17:55:11.577614   82536 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0417 17:55:11.580222   82536 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0417 17:55:11.580463   82536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 17:55:11.615179   82536 out.go:97] Using the kvm2 driver based on user configuration
	I0417 17:55:11.615205   82536 start.go:297] selected driver: kvm2
	I0417 17:55:11.615214   82536 start.go:901] validating driver "kvm2" against <nil>
	I0417 17:55:11.615573   82536 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:55:11.615672   82536 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75265/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 17:55:11.631117   82536 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 17:55:11.631174   82536 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 17:55:11.631696   82536 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0417 17:55:11.631847   82536 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0417 17:55:11.631922   82536 cni.go:84] Creating CNI manager for ""
	I0417 17:55:11.631940   82536 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0417 17:55:11.631951   82536 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0417 17:55:11.632022   82536 start.go:340] cluster config:
	{Name:download-only-629558 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-629558 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 17:55:11.632254   82536 iso.go:125] acquiring lock: {Name:mkdb5ecc5c4e91e99ad7d2daa7006426e0e30784 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:55:11.634157   82536 out.go:97] Downloading VM boot image ...
	I0417 17:55:11.634196   82536 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18665-75265/.minikube/cache/iso/amd64/minikube-v1.33.0-1713236417-18649-amd64.iso
	I0417 17:55:14.478458   82536 out.go:97] Starting "download-only-629558" primary control-plane node in "download-only-629558" cluster
	I0417 17:55:14.478489   82536 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0417 17:55:14.502550   82536 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0417 17:55:14.502584   82536 cache.go:56] Caching tarball of preloaded images
	I0417 17:55:14.502784   82536 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0417 17:55:14.504386   82536 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0417 17:55:14.504418   82536 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0417 17:55:14.527318   82536 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18665-75265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-629558 host does not exist
	  To start a cluster, run: "minikube start -p download-only-629558"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
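
The non-zero exit above is expected rather than a defect: a --download-only profile never boots a host, so "minikube logs" has nothing to collect, and the test passes on the command's output alone. A reproduction sketch with the same flags as this run:

    # cache the ISO and preload without starting a node, then ask for logs
    out/minikube-linux-amd64 start --download-only -p download-only-629558 --force --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2
    out/minikube-linux-amd64 logs -p download-only-629558   # non-zero exit: host does not exist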

TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-629558
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.0-rc.2/json-events (9.14s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-759265 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-759265 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (9.135889509s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (9.14s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-759265
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-759265: exit status 85 (73.752024ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-629558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:55 UTC |                     |
	|         | -p download-only-629558           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:55 UTC | 17 Apr 24 17:55 UTC |
	| delete  | -p download-only-629558           | download-only-629558 | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:55 UTC | 17 Apr 24 17:55 UTC |
	| start   | -o=json --download-only           | download-only-759265 | jenkins | v1.33.0-beta.0 | 17 Apr 24 17:55 UTC |                     |
	|         | -p download-only-759265           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=containerd    |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/17 17:55:20
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0417 17:55:20.940598   82710 out.go:291] Setting OutFile to fd 1 ...
	I0417 17:55:20.940805   82710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:55:20.940831   82710 out.go:304] Setting ErrFile to fd 2...
	I0417 17:55:20.940838   82710 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 17:55:20.941017   82710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	I0417 17:55:20.941549   82710 out.go:298] Setting JSON to true
	I0417 17:55:20.942382   82710 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5871,"bootTime":1713370650,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 17:55:20.942438   82710 start.go:139] virtualization: kvm guest
	I0417 17:55:20.944638   82710 out.go:97] [download-only-759265] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 17:55:20.946073   82710 out.go:169] MINIKUBE_LOCATION=18665
	I0417 17:55:20.944777   82710 notify.go:220] Checking for updates...
	I0417 17:55:20.948804   82710 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 17:55:20.950132   82710 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	I0417 17:55:20.951351   82710 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	I0417 17:55:20.952532   82710 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0417 17:55:20.954693   82710 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0417 17:55:20.954896   82710 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 17:55:20.986445   82710 out.go:97] Using the kvm2 driver based on user configuration
	I0417 17:55:20.986467   82710 start.go:297] selected driver: kvm2
	I0417 17:55:20.986472   82710 start.go:901] validating driver "kvm2" against <nil>
	I0417 17:55:20.986781   82710 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:55:20.986857   82710 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18665-75265/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0417 17:55:21.000988   82710 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0417 17:55:21.001062   82710 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0417 17:55:21.001540   82710 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0417 17:55:21.001676   82710 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0417 17:55:21.001726   82710 cni.go:84] Creating CNI manager for ""
	I0417 17:55:21.001739   82710 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0417 17:55:21.001747   82710 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0417 17:55:21.001798   82710 start.go:340] cluster config:
	{Name:download-only-759265 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-759265 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 17:55:21.001883   82710 iso.go:125] acquiring lock: {Name:mkdb5ecc5c4e91e99ad7d2daa7006426e0e30784 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0417 17:55:21.003363   82710 out.go:97] Starting "download-only-759265" primary control-plane node in "download-only-759265" cluster
	I0417 17:55:21.003374   82710 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime containerd
	I0417 17:55:21.025885   82710 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0417 17:55:21.025906   82710 cache.go:56] Caching tarball of preloaded images
	I0417 17:55:21.026020   82710 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime containerd
	I0417 17:55:21.027656   82710 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0417 17:55:21.027675   82710 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0417 17:55:21.048613   82710 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:dfcc3b0407e077e710ff902e47acd662 -> /home/jenkins/minikube-integration/18665-75265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0417 17:55:23.453018   82710 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0417 17:55:23.453104   82710 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18665-75265/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0417 17:55:24.180338   82710 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-rc.2 on containerd
	I0417 17:55:24.180680   82710 profile.go:143] Saving config to /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/download-only-759265/config.json ...
	I0417 17:55:24.180713   82710 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/download-only-759265/config.json: {Name:mkab24e400f71c04acfc87752d314cdebf2e309e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0417 17:55:24.180896   82710 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime containerd
	I0417 17:55:24.181017   82710 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18665-75265/.minikube/cache/linux/amd64/v1.30.0-rc.2/kubectl
	
	
	* The control-plane node download-only-759265 host does not exist
	  To start a cluster, run: "minikube start -p download-only-759265"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-759265
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-964249 --alsologtostderr --binary-mirror http://127.0.0.1:34013 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-964249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-964249
--- PASS: TestBinaryMirror (0.58s)
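
The --binary-mirror flag redirects the kubectl, kubelet, and kubeadm downloads to the given endpoint instead of dl.k8s.io (here, presumably a server the test harness runs on a loopback port). A hedged sketch of standing one up by hand; python3 -m http.server is only an illustration and assumes the mirror root already mimics the release layout:

    python3 -m http.server 34013 &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-964249 --binary-mirror http://127.0.0.1:34013 --driver=kvm2 --container-runtime=containerd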

TestOffline (103.62s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-343117 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-343117 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m41.813888064s)
helpers_test.go:175: Cleaning up "offline-containerd-343117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-343117
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-343117: (1.801952307s)
--- PASS: TestOffline (103.62s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-526030
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-526030: exit status 85 (61.923462ms)

-- stdout --
	* Profile "addons-526030" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-526030"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-526030
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-526030: exit status 85 (61.049564ms)

-- stdout --
	* Profile "addons-526030" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-526030"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (144.34s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-526030 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-526030 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m24.33652798s)
--- PASS: TestAddons/Setup (144.34s)

TestAddons/parallel/Registry (16.76s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 16.694565ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-2z4pq" [bfdb408d-9797-4eb7-be26-eae6cf05dd64] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005022586s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nmth6" [9ee6d03b-492b-45ae-a08e-23c5cc061928] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006058008s
addons_test.go:340: (dbg) Run:  kubectl --context addons-526030 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-526030 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-526030 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.702017738s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 ip
2024/04/17 17:58:11 [DEBUG] GET http://192.168.39.10:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.76s)
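
The registry addon gets exercised from both directions above: in-cluster DNS plus HTTP against registry.kube-system.svc.cluster.local, and the host-facing proxy on port 5000 of the node IP. A manual sketch of the same checks (assuming this run's profile; /v2/ is the standard registry API root):

    # in-cluster reachability, as in the test
    kubectl --context addons-526030 run --rm -i registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # host side: the registry proxy listens on the node IP
    curl -s "http://$(out/minikube-linux-amd64 -p addons-526030 ip):5000/v2/"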

TestAddons/parallel/Ingress (23.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-526030 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-526030 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-526030 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b898777a-56d9-4647-ab44-cd8c404243d6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b898777a-56d9-4647-ab44-cd8c404243d6] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005009052s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-526030 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.10
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-526030 addons disable ingress-dns --alsologtostderr -v=1: (2.19698456s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-526030 addons disable ingress --alsologtostderr -v=1: (8.31101448s)
--- PASS: TestAddons/parallel/Ingress (23.82s)
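
Both ingress paths above can be reproduced from a host without ingress-dns configured by pinning name resolution on the client. A sketch (node IP and host names taken from this run's testdata):

    IP=$(out/minikube-linux-amd64 -p addons-526030 ip)
    # equivalent of the in-VM curl with an explicit Host header
    curl -s --resolve nginx.example.com:80:$IP http://nginx.example.com/
    # ingress-dns check: query the addon's DNS server directly
    nslookup hello-john.test $IP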

TestAddons/parallel/InspektorGadget (11.14s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lshhl" [1a67dd51-281d-4cdd-b52a-7e16f67dfd8e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011494622s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-526030
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-526030: (6.132269664s)
--- PASS: TestAddons/parallel/InspektorGadget (11.14s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.460725ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-wd96j" [aae1c30a-55f9-45e0-bcb7-3c4ae43d9652] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00474731s
addons_test.go:415: (dbg) Run:  kubectl --context addons-526030 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)

TestAddons/parallel/HelmTiller (13.93s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.646688ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-fx24c" [a9b42e34-0ace-44b0-b502-8fab531ac901] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.00632169s
addons_test.go:473: (dbg) Run:  kubectl --context addons-526030 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-526030 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.317265593s)
addons_test.go:478: kubectl --context addons-526030 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.93s)
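
The stderr warning above is a kubectl artifact, not a tiller failure: the test passes -t while stdin is not a terminal, so kubectl warns and falls back to streaming logs. A sketch that avoids the warning (keep -i, drop -t):

    kubectl --context addons-526030 run --rm -i helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 --namespace=kube-system -- version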

TestAddons/parallel/CSI (65.08s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 21.773008ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-526030 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-526030 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [581ec0aa-ccea-4528-b3d1-5a41fb85309b] Pending
helpers_test.go:344: "task-pv-pod" [581ec0aa-ccea-4528-b3d1-5a41fb85309b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [581ec0aa-ccea-4528-b3d1-5a41fb85309b] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004284809s
addons_test.go:584: (dbg) Run:  kubectl --context addons-526030 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-526030 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-526030 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-526030 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-526030 delete pod task-pv-pod: (1.315874912s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-526030 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-526030 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-526030 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5ccb878f-cbbe-4212-b39f-787c8cbd276b] Pending
helpers_test.go:344: "task-pv-pod-restore" [5ccb878f-cbbe-4212-b39f-787c8cbd276b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5ccb878f-cbbe-4212-b39f-787c8cbd276b] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004587885s
addons_test.go:626: (dbg) Run:  kubectl --context addons-526030 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-526030 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-526030 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-526030 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.765925638s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.08s)
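
The run above exercises the csi-hostpath-driver addon end to end: provision a PVC, mount it in a pod, snapshot it, delete the original, then restore a new PVC from the snapshot. A condensed sketch of the same flow (the testdata manifests themselves are not reproduced in this report, so only their roles are noted):

    kubectl create -f pvc.yaml               # PVC "hpvc", provisioned by the hostpath CSI driver
    kubectl create -f pv-pod.yaml            # pod "task-pv-pod" mounting hpvc
    kubectl create -f snapshot.yaml          # VolumeSnapshot "new-snapshot-demo" of hpvc
    kubectl delete pod task-pv-pod
    kubectl delete pvc hpvc
    kubectl create -f pvc-restore.yaml       # PVC "hpvc-restore" using the snapshot as dataSource
    kubectl create -f pv-pod-restore.yaml    # pod "task-pv-pod-restore" mounting the restored PVC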

TestAddons/parallel/Headlamp (13.89s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-526030 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-b9lcz" [7576f509-d688-457c-b34b-c624aa78c394] Pending
helpers_test.go:344: "headlamp-7559bf459f-b9lcz" [7576f509-d688-457c-b34b-c624aa78c394] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-b9lcz" [7576f509-d688-457c-b34b-c624aa78c394] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003609044s
--- PASS: TestAddons/parallel/Headlamp (13.89s)
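
Outside the harness, the equivalent steps are to enable the addon and wait on its pod; minikube here stands for the binary under test (out/minikube-linux-amd64), and the label selector comes from the wait condition above:

    minikube addons enable headlamp -p addons-526030 --alsologtostderr -v=1
    kubectl get pods -n headlamp -l app.kubernetes.io/name=headlamp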

TestAddons/parallel/CloudSpanner (5.82s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-q4mtl" [abdb0feb-6ede-44fb-90ff-d1eeb39e59f1] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005039341s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-526030
--- PASS: TestAddons/parallel/CloudSpanner (5.82s)

TestAddons/parallel/LocalPath (12.31s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-526030 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-526030 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-526030 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6c4c5e9f-e239-441b-8fc2-ce09b571cc4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6c4c5e9f-e239-441b-8fc2-ce09b571cc4a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6c4c5e9f-e239-441b-8fc2-ce09b571cc4a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005230384s
addons_test.go:891: (dbg) Run:  kubectl --context addons-526030 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 ssh "cat /opt/local-path-provisioner/pvc-ba42df71-7ef8-47fd-98c2-aeaa32f40582_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-526030 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-526030 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-526030 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.31s)
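
The test-pvc above is provisioned by the storage-provisioner-rancher (local-path) addon. A minimal sketch of such a claim, assuming the provisioner's conventional "local-path" storage class, since testdata/storage-provisioner-rancher/pvc.yaml is not reproduced in this report (save as pvc.yaml, a hypothetical filename, then kubectl --context addons-526030 apply -f pvc.yaml):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 64Mi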

TestAddons/parallel/NvidiaDevicePlugin (6.7s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mt6jb" [56e2120c-a50a-48da-98c6-42f339573b6b] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008420448s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-526030
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.70s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-hlj6n" [93c05dcf-4112-4e9c-a20c-28c148d4c0e9] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004284901s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-526030 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-526030 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
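
The check verifies that the gcp-auth addon copies its pull secret into namespaces created after the addon was enabled; by hand, it is the same two commands as above:

    kubectl --context addons-526030 create ns new-namespace
    kubectl --context addons-526030 get secret gcp-auth -n new-namespace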

TestAddons/StoppedEnableDisable (92.75s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-526030
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-526030: (1m32.442888304s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-526030
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-526030
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-526030
--- PASS: TestAddons/StoppedEnableDisable (92.75s)
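
The point of this test is that addon state can be flipped while the cluster is down; the same sequence by hand, straight from the commands above:

    minikube stop -p addons-526030
    minikube addons enable dashboard -p addons-526030
    minikube addons disable dashboard -p addons-526030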

TestCertOptions (94.57s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-614877 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-614877 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m33.31993144s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-614877 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-614877 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-614877 -- "sudo cat /etc/kubernetes/admin.conf"
E0417 18:57:55.603947   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "cert-options-614877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-614877
--- PASS: TestCertOptions (94.57s)
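
The openssl step checks that the extra --apiserver-ips/--apiserver-names SANs and the non-default port 8555 made it into the serving certificate. The same inspection, filtered to the SAN block (the grep is a readability addition, not part of the test):

    minikube -p cert-options-614877 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'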

TestCertExpiration (295.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-909544 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-909544 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m39.739341564s)
E0417 18:59:14.312352   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:59:31.260866   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-909544 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-909544 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (14.564359149s)
helpers_test.go:175: Cleaning up "cert-expiration-909544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-909544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-909544: (1.010632866s)
--- PASS: TestCertExpiration (295.32s)
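
The flow: start with three-minute certificates, let them lapse (the test idles past the 3m mark between the two starts), then restart with a one-year expiration to force regeneration. By hand:

    minikube start -p cert-expiration-909544 --memory=2048 --cert-expiration=3m \
      --driver=kvm2 --container-runtime=containerd
    # wait for the 3m window to pass, then:
    minikube start -p cert-expiration-909544 --memory=2048 --cert-expiration=8760h \
      --driver=kvm2 --container-runtime=containerd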

TestForceSystemdFlag (50.53s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-455104 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-455104 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (48.981228327s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-455104 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-455104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-455104
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-455104: (1.304678772s)
--- PASS: TestForceSystemdFlag (50.53s)
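
--force-systemd is asserted by reading the generated containerd config. A hand check that filters the file the test cats for containerd's standard runc cgroup option (the grep and expected value are assumptions based on containerd's config format, which this report does not show):

    minikube -p force-systemd-flag-455104 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # expected: SystemdCgroup = true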

TestForceSystemdEnv (50.68s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-903990 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-903990 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (49.432670242s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-903990 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-903990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-903990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-903990: (1.034173259s)
--- PASS: TestForceSystemdEnv (50.68s)

TestKVMDriverInstallOrUpdate (3.61s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.61s)

TestErrorSpam/setup (45.64s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-382392 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-382392 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-382392 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-382392 --driver=kvm2  --container-runtime=containerd: (45.638708644s)
--- PASS: TestErrorSpam/setup (45.64s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.63s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 pause
--- PASS: TestErrorSpam/pause (1.63s)

TestErrorSpam/unpause (1.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

TestErrorSpam/stop (5.02s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 stop: (1.593784443s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 stop: (1.479250801s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-382392 --log_dir /tmp/nospam-382392 stop: (1.945525631s)
--- PASS: TestErrorSpam/stop (5.02s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18665-75265/.minikube/files/etc/test/nested/copy/82524/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366561 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-366561 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (59.22444793s)
--- PASS: TestFunctional/serial/StartWithProxy (59.22s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (46.59s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366561 --alsologtostderr -v=8
E0417 18:02:55.603225   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:02:55.608962   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:02:55.619284   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:02:55.639549   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:02:55.679820   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:02:55.760193   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:02:55.920640   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:02:56.241256   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:02:56.882200   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:02:58.162905   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:03:00.723831   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:03:05.844848   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:03:16.085498   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-366561 --alsologtostderr -v=8: (46.592564155s)
functional_test.go:659: soft start took 46.593341853s for "functional-366561" cluster.
--- PASS: TestFunctional/serial/SoftStart (46.59s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-366561 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 cache add registry.k8s.io/pause:3.1: (1.236577918s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 cache add registry.k8s.io/pause:3.3: (1.231384356s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 cache add registry.k8s.io/pause:latest
E0417 18:03:36.565889   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 cache add registry.k8s.io/pause:latest: (1.357087609s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.83s)

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-366561 /tmp/TestFunctionalserialCacheCmdcacheadd_local416709360/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 cache add minikube-local-cache-test:functional-366561
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 cache add minikube-local-cache-test:functional-366561: (1.741211663s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 cache delete minikube-local-cache-test:functional-366561
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-366561
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (226.283848ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 cache reload: (1.230640556s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)
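
The reload round trip shows how a cached image is restored after being removed from the node: remove it with crictl, run cache reload to re-push everything in minikube's cache, then confirm with inspecti. The loop by hand, from the commands above:

    minikube -p functional-366561 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-366561 cache reload
    minikube -p functional-366561 ssh sudo crictl inspecti registry.k8s.io/pause:latest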

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 kubectl -- --context functional-366561 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-366561 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (40.54s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366561 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0417 18:04:17.526201   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-366561 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.536140138s)
functional_test.go:757: restart took 40.536273827s for "functional-366561" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.54s)
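
--extra-config takes component.key=value pairs and passes the flag through to the named component (here the apiserver's admission-plugin list), and the setting persists in the profile across restarts:

    minikube start -p functional-366561 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all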

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-366561 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
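
The health check walks the control-plane pods' status conditions. The underlying query is the one from the log; the jsonpath projection here is an illustrative addition for a compact phase summary:

    kubectl --context functional-366561 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'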

TestFunctional/serial/LogsCmd (1.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 logs: (1.65695909s)
--- PASS: TestFunctional/serial/LogsCmd (1.66s)

TestFunctional/serial/LogsFileCmd (1.69s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 logs --file /tmp/TestFunctionalserialLogsFileCmd3239334043/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 logs --file /tmp/TestFunctionalserialLogsFileCmd3239334043/001/logs.txt: (1.685813533s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

TestFunctional/serial/InvalidService (4.75s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-366561 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-366561
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-366561: exit status 115 (291.706895ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.13:30288 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-366561 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-366561 delete -f testdata/invalidsvc.yaml: (1.248249506s)
--- PASS: TestFunctional/serial/InvalidService (4.75s)
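
An "invalid" service here is one whose selector matches no running pod, so minikube service finds no endpoints and exits 115 with SVC_UNREACHABLE. A minimal sketch of such a manifest (testdata/invalidsvc.yaml is not reproduced in this report, so this shape is an assumption; apply it with kubectl --context functional-366561 apply -f invalidsvc.yaml):

    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod    # matches nothing, so the service gets no endpoints
      ports:
      - port: 80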

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 config get cpus: exit status 14 (81.950078ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 config get cpus: exit status 14 (54.699354ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
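
config get on an unset key exits 14 with "specified key could not be found in config", which is what the two Non-zero exits above assert. The round trip by hand:

    minikube -p functional-366561 config set cpus 2
    minikube -p functional-366561 config get cpus     # prints 2
    minikube -p functional-366561 config unset cpus
    minikube -p functional-366561 config get cpus     # exit status 14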

TestFunctional/parallel/DashboardCmd (14.48s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-366561 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-366561 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 89903: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.48s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366561 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-366561 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (149.215021ms)
-- stdout --
	* [functional-366561] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
-- /stdout --
** stderr ** 
	I0417 18:04:59.949385   89021 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:04:59.949634   89021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:04:59.949644   89021 out.go:304] Setting ErrFile to fd 2...
	I0417 18:04:59.949649   89021 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:04:59.949874   89021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	I0417 18:04:59.950428   89021 out.go:298] Setting JSON to false
	I0417 18:04:59.951349   89021 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6450,"bootTime":1713370650,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:04:59.951411   89021 start.go:139] virtualization: kvm guest
	I0417 18:04:59.953756   89021 out.go:177] * [functional-366561] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 18:04:59.955208   89021 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:04:59.955229   89021 notify.go:220] Checking for updates...
	I0417 18:04:59.956521   89021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:04:59.958356   89021 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	I0417 18:04:59.959699   89021 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	I0417 18:04:59.961066   89021 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:04:59.962392   89021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:04:59.963964   89021 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0417 18:04:59.964428   89021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:04:59.964479   89021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:04:59.980502   89021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0417 18:04:59.980924   89021 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:04:59.981549   89021 main.go:141] libmachine: Using API Version  1
	I0417 18:04:59.981575   89021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:04:59.981968   89021 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:04:59.982168   89021 main.go:141] libmachine: (functional-366561) Calling .DriverName
	I0417 18:04:59.982520   89021 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:04:59.982789   89021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:04:59.982827   89021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:04:59.997310   89021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37085
	I0417 18:04:59.997719   89021 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:04:59.998112   89021 main.go:141] libmachine: Using API Version  1
	I0417 18:04:59.998149   89021 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:04:59.998499   89021 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:04:59.998679   89021 main.go:141] libmachine: (functional-366561) Calling .DriverName
	I0417 18:05:00.031465   89021 out.go:177] * Using the kvm2 driver based on existing profile
	I0417 18:05:00.032867   89021 start.go:297] selected driver: kvm2
	I0417 18:05:00.032900   89021 start.go:901] validating driver "kvm2" against &{Name:functional-366561 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:functional-366561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:05:00.033034   89021 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:05:00.035237   89021 out.go:177] 
	W0417 18:05:00.036538   89021 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0417 18:05:00.037827   89021 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366561 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.29s)
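
--dry-run runs the full validation path without creating a VM; the 250MB request trips the 1800MB usable minimum (RSRC_INSUFFICIENT_REQ_MEMORY, exit 23), while the second invocation with no memory override validates cleanly. A passing variant only needs a legal allocation:

    minikube start -p functional-366561 --dry-run --memory 2048mb \
      --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd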

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-366561 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-366561 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (146.15432ms)
-- stdout --
	* [functional-366561] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0417 18:05:00.689214   89148 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:05:00.689343   89148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:05:00.689354   89148 out.go:304] Setting ErrFile to fd 2...
	I0417 18:05:00.689360   89148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:05:00.689665   89148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	I0417 18:05:00.690202   89148 out.go:298] Setting JSON to false
	I0417 18:05:00.691132   89148 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6451,"bootTime":1713370650,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:05:00.691200   89148 start.go:139] virtualization: kvm guest
	I0417 18:05:00.693382   89148 out.go:177] * [functional-366561] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0417 18:05:00.694880   89148 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:05:00.694907   89148 notify.go:220] Checking for updates...
	I0417 18:05:00.696326   89148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:05:00.697790   89148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	I0417 18:05:00.699251   89148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	I0417 18:05:00.700506   89148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:05:00.701793   89148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:05:00.703419   89148 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0417 18:05:00.703871   89148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:05:00.703930   89148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:05:00.718769   89148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I0417 18:05:00.719242   89148 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:05:00.719762   89148 main.go:141] libmachine: Using API Version  1
	I0417 18:05:00.719785   89148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:05:00.720162   89148 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:05:00.720372   89148 main.go:141] libmachine: (functional-366561) Calling .DriverName
	I0417 18:05:00.720634   89148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:05:00.721029   89148 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:05:00.721072   89148 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:05:00.735828   89148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0417 18:05:00.736205   89148 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:05:00.736707   89148 main.go:141] libmachine: Using API Version  1
	I0417 18:05:00.736726   89148 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:05:00.737042   89148 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:05:00.737233   89148 main.go:141] libmachine: (functional-366561) Calling .DriverName
	I0417 18:05:00.768327   89148 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0417 18:05:00.769638   89148 start.go:297] selected driver: kvm2
	I0417 18:05:00.769653   89148 start.go:901] validating driver "kvm2" against &{Name:functional-366561 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18649/minikube-v1.33.0-1713236417-18649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713236840-18649@sha256:c67dbc47b437ffe7d18f65acebd2213336466a75b1de10cec62939ffc450543e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:functional-366561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.13 Port:8441 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0417 18:05:00.769802   89148 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:05:00.772031   89148 out.go:177] 
	W0417 18:05:00.773262   89148 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0417 18:05:00.774528   89148 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.85s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.85s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (34.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ee9d05a1-869f-47d2-90dd-cacfd41c6775] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005652494s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-366561 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-366561 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-366561 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-366561 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-366561 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [af3e3341-24a0-4157-bc43-d12c80894555] Pending
helpers_test.go:344: "sp-pod" [af3e3341-24a0-4157-bc43-d12c80894555] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [af3e3341-24a0-4157-bc43-d12c80894555] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.344127484s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-366561 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-366561 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-366561 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c] Pending
helpers_test.go:344: "sp-pod" [cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cb0a7c74-189d-40f0-8b3c-d3fbc6324d0c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.016491491s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-366561 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.76s)

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (1.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh -n functional-366561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 cp functional-366561:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1307465259/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh -n functional-366561 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh -n functional-366561 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

TestFunctional/parallel/MySQL (28.67s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-366561 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-jfzlw" [3b855998-a21e-4107-9721-27547f2f5281] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-jfzlw" [3b855998-a21e-4107-9721-27547f2f5281] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.00401664s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366561 exec mysql-64454c8b5c-jfzlw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-366561 exec mysql-64454c8b5c-jfzlw -- mysql -ppassword -e "show databases;": exit status 1 (149.19812ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366561 exec mysql-64454c8b5c-jfzlw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-366561 exec mysql-64454c8b5c-jfzlw -- mysql -ppassword -e "show databases;": exit status 1 (188.717723ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366561 exec mysql-64454c8b5c-jfzlw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-366561 exec mysql-64454c8b5c-jfzlw -- mysql -ppassword -e "show databases;": exit status 1 (164.481695ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366561 exec mysql-64454c8b5c-jfzlw -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-366561 exec mysql-64454c8b5c-jfzlw -- mysql -ppassword -e "show databases;": exit status 1 (566.627731ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-366561 exec mysql-64454c8b5c-jfzlw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.67s)

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/82524/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo cat /etc/test/nested/copy/82524/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/82524.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo cat /etc/ssl/certs/82524.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/82524.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo cat /usr/share/ca-certificates/82524.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/825242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo cat /etc/ssl/certs/825242.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/825242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo cat /usr/share/ca-certificates/825242.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-366561 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 ssh "sudo systemctl is-active docker": exit status 1 (246.806766ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 ssh "sudo systemctl is-active crio": exit status 1 (205.377201ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/ServiceCmd/DeployApp (19.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-366561 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-366561 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-cwvgp" [1c91ea8f-0981-42ff-90f1-f66dc5a476cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-cwvgp" [1c91ea8f-0981-42ff-90f1-f66dc5a476cf] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 19.004751864s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (19.22s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 service list -o json
functional_test.go:1490: Took "448.412149ms" to run "out/minikube-linux-amd64 -p functional-366561 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.13:30176
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.13:30176
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "235.67249ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "75.021839ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "266.552282ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "55.902298ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (6.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdany-port2068520610/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713377095812280804" to /tmp/TestFunctionalparallelMountCmdany-port2068520610/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713377095812280804" to /tmp/TestFunctionalparallelMountCmdany-port2068520610/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713377095812280804" to /tmp/TestFunctionalparallelMountCmdany-port2068520610/001/test-1713377095812280804
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (200.912796ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 17 18:04 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 17 18:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 17 18:04 test-1713377095812280804
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh cat /mount-9p/test-1713377095812280804
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-366561 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ba240c32-9c8b-4857-b346-62c7edb3d934] Pending
helpers_test.go:344: "busybox-mount" [ba240c32-9c8b-4857-b346-62c7edb3d934] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ba240c32-9c8b-4857-b346-62c7edb3d934] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ba240c32-9c8b-4857-b346-62c7edb3d934] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.009837692s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-366561 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdany-port2068520610/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.60s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.84s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.84s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-366561 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0-rc.2
registry.k8s.io/kube-proxy:v1.30.0-rc.2
registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
registry.k8s.io/kube-apiserver:v1.30.0-rc.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-366561
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-366561
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366561 image ls --format short --alsologtostderr:
I0417 18:05:21.464513   90486 out.go:291] Setting OutFile to fd 1 ...
I0417 18:05:21.464649   90486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.464694   90486 out.go:304] Setting ErrFile to fd 2...
I0417 18:05:21.464708   90486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.464977   90486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
I0417 18:05:21.465803   90486 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.465981   90486 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.466492   90486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.466559   90486 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.481024   90486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46287
I0417 18:05:21.481647   90486 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.482264   90486 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.482296   90486 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.482661   90486 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.482829   90486 main.go:141] libmachine: (functional-366561) Calling .GetState
I0417 18:05:21.485130   90486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.485177   90486 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.500363   90486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39009
I0417 18:05:21.501202   90486 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.501821   90486 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.501845   90486 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.502186   90486 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.502622   90486 main.go:141] libmachine: (functional-366561) Calling .DriverName
I0417 18:05:21.502830   90486 ssh_runner.go:195] Run: systemctl --version
I0417 18:05:21.502859   90486 main.go:141] libmachine: (functional-366561) Calling .GetSSHHostname
I0417 18:05:21.505806   90486 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.506304   90486 main.go:141] libmachine: (functional-366561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:f6:a2", ip: ""} in network mk-functional-366561: {Iface:virbr1 ExpiryTime:2024-04-17 19:02:03 +0000 UTC Type:0 Mac:52:54:00:55:f6:a2 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-366561 Clientid:01:52:54:00:55:f6:a2}
I0417 18:05:21.506324   90486 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined IP address 192.168.39.13 and MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.506608   90486 main.go:141] libmachine: (functional-366561) Calling .GetSSHPort
I0417 18:05:21.506962   90486 main.go:141] libmachine: (functional-366561) Calling .GetSSHKeyPath
I0417 18:05:21.507156   90486 main.go:141] libmachine: (functional-366561) Calling .GetSSHUsername
I0417 18:05:21.507283   90486 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/functional-366561/id_rsa Username:docker}
I0417 18:05:21.613120   90486 ssh_runner.go:195] Run: sudo crictl images --output json
I0417 18:05:21.714853   90486 main.go:141] libmachine: Making call to close driver server
I0417 18:05:21.714878   90486 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:21.715210   90486 main.go:141] libmachine: (functional-366561) DBG | Closing plugin on server side
I0417 18:05:21.715214   90486 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:21.715240   90486 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:05:21.715252   90486 main.go:141] libmachine: Making call to close driver server
I0417 18:05:21.715263   90486 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:21.715477   90486 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:21.715499   90486 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-366561 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| gcr.io/google-containers/addon-resizer      | functional-366561  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0-rc.2       | sha256:ae2ef7 | 31MB   |
| registry.k8s.io/kube-proxy                  | v1.30.0-rc.2       | sha256:35c7fe | 29MB   |
| registry.k8s.io/kube-scheduler              | v1.30.0-rc.2       | sha256:461015 | 19.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/kube-apiserver              | v1.30.0-rc.2       | sha256:65a750 | 32.7MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/minikube-local-cache-test | functional-366561  | sha256:9aa730 | 990B   |
| docker.io/library/nginx                     | latest             | sha256:c613f1 | 70.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366561 image ls --format table --alsologtostderr:
I0417 18:05:21.757173   90593 out.go:291] Setting OutFile to fd 1 ...
I0417 18:05:21.757278   90593 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.757287   90593 out.go:304] Setting ErrFile to fd 2...
I0417 18:05:21.757291   90593 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.757467   90593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
I0417 18:05:21.758031   90593 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.758123   90593 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.758454   90593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.758500   90593 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.772991   90593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
I0417 18:05:21.773479   90593 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.774064   90593 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.774089   90593 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.774364   90593 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.774534   90593 main.go:141] libmachine: (functional-366561) Calling .GetState
I0417 18:05:21.776297   90593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.776339   90593 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.790408   90593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40041
I0417 18:05:21.790733   90593 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.791158   90593 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.791184   90593 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.791462   90593 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.791633   90593 main.go:141] libmachine: (functional-366561) Calling .DriverName
I0417 18:05:21.791802   90593 ssh_runner.go:195] Run: systemctl --version
I0417 18:05:21.791821   90593 main.go:141] libmachine: (functional-366561) Calling .GetSSHHostname
I0417 18:05:21.794413   90593 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.794745   90593 main.go:141] libmachine: (functional-366561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:f6:a2", ip: ""} in network mk-functional-366561: {Iface:virbr1 ExpiryTime:2024-04-17 19:02:03 +0000 UTC Type:0 Mac:52:54:00:55:f6:a2 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-366561 Clientid:01:52:54:00:55:f6:a2}
I0417 18:05:21.794772   90593 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined IP address 192.168.39.13 and MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.794949   90593 main.go:141] libmachine: (functional-366561) Calling .GetSSHPort
I0417 18:05:21.795102   90593 main.go:141] libmachine: (functional-366561) Calling .GetSSHKeyPath
I0417 18:05:21.795261   90593 main.go:141] libmachine: (functional-366561) Calling .GetSSHUsername
I0417 18:05:21.795391   90593 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/functional-366561/id_rsa Username:docker}
I0417 18:05:21.883145   90593 ssh_runner.go:195] Run: sudo crictl images --output json
I0417 18:05:21.929938   90593 main.go:141] libmachine: Making call to close driver server
I0417 18:05:21.929959   90593 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:21.930237   90593 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:21.930270   90593 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:05:21.930280   90593 main.go:141] libmachine: Making call to close driver server
I0417 18:05:21.930292   90593 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:21.930349   90593 main.go:141] libmachine: (functional-366561) DBG | Closing plugin on server side
I0417 18:05:21.930514   90593 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:21.930528   90593 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-366561 image ls --format json --alsologtostderr:
[{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0-rc.2"],"size":"32662409"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io
/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:9aa73052298b8149c490bd2962430f4d8b4bd950b09fc60323bb2568148289a5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-366561"],"size":"990"},{"id":"sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":["docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1"],"repoTags":["docker.io/library/nginx:latest"],"size":"70542235"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-366561"],"size":"10823156"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:461015b94df4b9e0beae6963e44faa05142f2bddf
16b1956a2c09ccefe0416a6","repoDigests":["registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0-rc.2"],"size":"19208499"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0-rc.2"],"size":"31029986"},{"id":"sha256:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7
701deb6309be51431e","repoDigests":["registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0-rc.2"],"size":"29020355"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:56cc512116c
8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366561 image ls --format json --alsologtostderr:
I0417 18:05:21.469803   90487 out.go:291] Setting OutFile to fd 1 ...
I0417 18:05:21.469952   90487 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.469965   90487 out.go:304] Setting ErrFile to fd 2...
I0417 18:05:21.469970   90487 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.470197   90487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
I0417 18:05:21.470817   90487 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.470955   90487 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.471499   90487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.471571   90487 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.486231   90487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
I0417 18:05:21.486704   90487 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.487211   90487 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.487230   90487 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.487678   90487 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.487867   90487 main.go:141] libmachine: (functional-366561) Calling .GetState
I0417 18:05:21.489986   90487 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.490058   90487 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.504162   90487 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36365
I0417 18:05:21.504718   90487 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.505282   90487 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.505297   90487 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.505674   90487 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.505844   90487 main.go:141] libmachine: (functional-366561) Calling .DriverName
I0417 18:05:21.506055   90487 ssh_runner.go:195] Run: systemctl --version
I0417 18:05:21.506074   90487 main.go:141] libmachine: (functional-366561) Calling .GetSSHHostname
I0417 18:05:21.514353   90487 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.514842   90487 main.go:141] libmachine: (functional-366561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:f6:a2", ip: ""} in network mk-functional-366561: {Iface:virbr1 ExpiryTime:2024-04-17 19:02:03 +0000 UTC Type:0 Mac:52:54:00:55:f6:a2 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-366561 Clientid:01:52:54:00:55:f6:a2}
I0417 18:05:21.514905   90487 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined IP address 192.168.39.13 and MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.515127   90487 main.go:141] libmachine: (functional-366561) Calling .GetSSHPort
I0417 18:05:21.515274   90487 main.go:141] libmachine: (functional-366561) Calling .GetSSHKeyPath
I0417 18:05:21.515419   90487 main.go:141] libmachine: (functional-366561) Calling .GetSSHUsername
I0417 18:05:21.515510   90487 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/functional-366561/id_rsa Username:docker}
I0417 18:05:21.605734   90487 ssh_runner.go:195] Run: sudo crictl images --output json
I0417 18:05:21.687482   90487 main.go:141] libmachine: Making call to close driver server
I0417 18:05:21.687502   90487 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:21.687787   90487 main.go:141] libmachine: (functional-366561) DBG | Closing plugin on server side
I0417 18:05:21.687843   90487 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:21.687856   90487 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:05:21.687863   90487 main.go:141] libmachine: Making call to close driver server
I0417 18:05:21.687874   90487 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:21.688173   90487 main.go:141] libmachine: (functional-366561) DBG | Closing plugin on server side
I0417 18:05:21.688174   90487 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:21.688199   90487 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-366561 image ls --format yaml --alsologtostderr:
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-366561
size: "10823156"
- id: sha256:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3c970620191febadad70f54370480a68daa722f3ba57f63ff2a71bfacd092053
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0-rc.2
size: "32662409"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:9aa73052298b8149c490bd2962430f4d8b4bd950b09fc60323bb2568148289a5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-366561
size: "990"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests:
- docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1
repoTags:
- docker.io/library/nginx:latest
size: "70542235"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a200e9dde0e8d0f39b3f7739ca4c65c17f76e03a2a4990dc0ba1b30831009ed8
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
size: "31029986"
- id: sha256:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:08a79e6f8708e181c82380ee521a5eaa4a1598a00b2ca708a5f70201fb17e543
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0-rc.2
size: "19208499"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0961badf165d0f1fed5c8b6e473b34d8c76a9318ae090a9071416c5731431ac5
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0-rc.2
size: "29020355"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366561 image ls --format yaml --alsologtostderr:
I0417 18:05:21.453627   90488 out.go:291] Setting OutFile to fd 1 ...
I0417 18:05:21.453827   90488 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.453842   90488 out.go:304] Setting ErrFile to fd 2...
I0417 18:05:21.453849   90488 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.454094   90488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
I0417 18:05:21.454732   90488 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.454853   90488 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.455284   90488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.455366   90488 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.477913   90488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36531
I0417 18:05:21.478559   90488 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.479169   90488 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.479192   90488 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.479642   90488 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.479813   90488 main.go:141] libmachine: (functional-366561) Calling .GetState
I0417 18:05:21.483034   90488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.483106   90488 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.498311   90488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
I0417 18:05:21.498776   90488 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.499342   90488 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.499364   90488 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.499772   90488 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.500002   90488 main.go:141] libmachine: (functional-366561) Calling .DriverName
I0417 18:05:21.500231   90488 ssh_runner.go:195] Run: systemctl --version
I0417 18:05:21.500260   90488 main.go:141] libmachine: (functional-366561) Calling .GetSSHHostname
I0417 18:05:21.503786   90488 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.504742   90488 main.go:141] libmachine: (functional-366561) Calling .GetSSHPort
I0417 18:05:21.504747   90488 main.go:141] libmachine: (functional-366561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:f6:a2", ip: ""} in network mk-functional-366561: {Iface:virbr1 ExpiryTime:2024-04-17 19:02:03 +0000 UTC Type:0 Mac:52:54:00:55:f6:a2 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-366561 Clientid:01:52:54:00:55:f6:a2}
I0417 18:05:21.504768   90488 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined IP address 192.168.39.13 and MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.504943   90488 main.go:141] libmachine: (functional-366561) Calling .GetSSHKeyPath
I0417 18:05:21.505134   90488 main.go:141] libmachine: (functional-366561) Calling .GetSSHUsername
I0417 18:05:21.505297   90488 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/functional-366561/id_rsa Username:docker}
I0417 18:05:21.609501   90488 ssh_runner.go:195] Run: sudo crictl images --output json
I0417 18:05:21.719547   90488 main.go:141] libmachine: Making call to close driver server
I0417 18:05:21.719566   90488 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:21.719812   90488 main.go:141] libmachine: (functional-366561) DBG | Closing plugin on server side
I0417 18:05:21.719860   90488 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:21.719877   90488 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:05:21.719886   90488 main.go:141] libmachine: Making call to close driver server
I0417 18:05:21.719903   90488 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:21.720206   90488 main.go:141] libmachine: (functional-366561) DBG | Closing plugin on server side
I0417 18:05:21.720270   90488 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:21.720310   90488 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 ssh pgrep buildkitd: exit status 1 (260.779958ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image build -t localhost/my-image:functional-366561 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 image build -t localhost/my-image:functional-366561 testdata/build --alsologtostderr: (3.271961606s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-366561 image build -t localhost/my-image:functional-366561 testdata/build --alsologtostderr:
I0417 18:05:21.719420   90582 out.go:291] Setting OutFile to fd 1 ...
I0417 18:05:21.719546   90582 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.719553   90582 out.go:304] Setting ErrFile to fd 2...
I0417 18:05:21.719560   90582 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0417 18:05:21.722472   90582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
I0417 18:05:21.723779   90582 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.724465   90582 config.go:182] Loaded profile config "functional-366561": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
I0417 18:05:21.725033   90582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.725090   90582 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.742025   90582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44821
I0417 18:05:21.742420   90582 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.743027   90582 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.743061   90582 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.743377   90582 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.743590   90582 main.go:141] libmachine: (functional-366561) Calling .GetState
I0417 18:05:21.745583   90582 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0417 18:05:21.745635   90582 main.go:141] libmachine: Launching plugin server for driver kvm2
I0417 18:05:21.761391   90582 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41565
I0417 18:05:21.761788   90582 main.go:141] libmachine: () Calling .GetVersion
I0417 18:05:21.762225   90582 main.go:141] libmachine: Using API Version  1
I0417 18:05:21.762246   90582 main.go:141] libmachine: () Calling .SetConfigRaw
I0417 18:05:21.762594   90582 main.go:141] libmachine: () Calling .GetMachineName
I0417 18:05:21.762767   90582 main.go:141] libmachine: (functional-366561) Calling .DriverName
I0417 18:05:21.762998   90582 ssh_runner.go:195] Run: systemctl --version
I0417 18:05:21.763029   90582 main.go:141] libmachine: (functional-366561) Calling .GetSSHHostname
I0417 18:05:21.765561   90582 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.765970   90582 main.go:141] libmachine: (functional-366561) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:f6:a2", ip: ""} in network mk-functional-366561: {Iface:virbr1 ExpiryTime:2024-04-17 19:02:03 +0000 UTC Type:0 Mac:52:54:00:55:f6:a2 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:functional-366561 Clientid:01:52:54:00:55:f6:a2}
I0417 18:05:21.765998   90582 main.go:141] libmachine: (functional-366561) DBG | domain functional-366561 has defined IP address 192.168.39.13 and MAC address 52:54:00:55:f6:a2 in network mk-functional-366561
I0417 18:05:21.766110   90582 main.go:141] libmachine: (functional-366561) Calling .GetSSHPort
I0417 18:05:21.766297   90582 main.go:141] libmachine: (functional-366561) Calling .GetSSHKeyPath
I0417 18:05:21.766446   90582 main.go:141] libmachine: (functional-366561) Calling .GetSSHUsername
I0417 18:05:21.766574   90582 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/functional-366561/id_rsa Username:docker}
I0417 18:05:21.853716   90582 build_images.go:161] Building image from path: /tmp/build.3284895559.tar
I0417 18:05:21.853791   90582 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0417 18:05:21.866243   90582 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3284895559.tar
I0417 18:05:21.871531   90582 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3284895559.tar: stat -c "%s %y" /var/lib/minikube/build/build.3284895559.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3284895559.tar': No such file or directory
I0417 18:05:21.871582   90582 ssh_runner.go:362] scp /tmp/build.3284895559.tar --> /var/lib/minikube/build/build.3284895559.tar (3072 bytes)
I0417 18:05:21.910115   90582 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3284895559
I0417 18:05:21.933828   90582 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3284895559 -xf /var/lib/minikube/build/build.3284895559.tar
I0417 18:05:21.947899   90582 containerd.go:394] Building image: /var/lib/minikube/build/build.3284895559
I0417 18:05:21.947970   90582 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3284895559 --local dockerfile=/var/lib/minikube/build/build.3284895559 --output type=image,name=localhost/my-image:functional-366561
#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:6cdf7d49458dd555ae87d9808d7fb821591dc065fdf6ee7ba1ab89f9432ac671 0.0s done
#8 exporting config sha256:02cbce529cb09e12103f33ba3fbac30901aed8cf4885699f972c41388a4bda22 0.0s done
#8 naming to localhost/my-image:functional-366561 done
#8 DONE 0.2s
I0417 18:05:24.884829   90582 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3284895559 --local dockerfile=/var/lib/minikube/build/build.3284895559 --output type=image,name=localhost/my-image:functional-366561: (2.936807515s)
I0417 18:05:24.884906   90582 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3284895559
I0417 18:05:24.902451   90582 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3284895559.tar
I0417 18:05:24.917246   90582 build_images.go:217] Built localhost/my-image:functional-366561 from /tmp/build.3284895559.tar
I0417 18:05:24.917282   90582 build_images.go:133] succeeded building to: functional-366561
I0417 18:05:24.917286   90582 build_images.go:134] failed building to: 
I0417 18:05:24.917311   90582 main.go:141] libmachine: Making call to close driver server
I0417 18:05:24.917322   90582 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:24.917589   90582 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:24.917612   90582 main.go:141] libmachine: Making call to close connection to plugin binary
I0417 18:05:24.917622   90582 main.go:141] libmachine: Making call to close driver server
I0417 18:05:24.917622   90582 main.go:141] libmachine: (functional-366561) DBG | Closing plugin on server side
I0417 18:05:24.917631   90582 main.go:141] libmachine: (functional-366561) Calling .Close
I0417 18:05:24.917901   90582 main.go:141] libmachine: (functional-366561) DBG | Closing plugin on server side
I0417 18:05:24.917937   90582 main.go:141] libmachine: Successfully made call to close driver server
I0417 18:05:24.917964   90582 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)
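Note on the build fixture: the buildkit transcript above (steps #5 through #7: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt) implies that testdata/build holds a Dockerfile along the lines of the sketch below. This is a reconstruction from the logged steps, not the literal fixture, and the content.txt payload here is a placeholder:

	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	echo placeholder > content.txt    # hypothetical payload; the log only shows a 62B build context
	out/minikube-linux-amd64 -p functional-366561 image build -t localhost/my-image:functional-366561 . --alsologtostderr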
TestFunctional/parallel/ImageCommands/Setup (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.375972291s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-366561
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.39s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image load --daemon gcr.io/google-containers/addon-resizer:functional-366561 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 image load --daemon gcr.io/google-containers/addon-resizer:functional-366561 --alsologtostderr: (5.714689535s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.93s)

TestFunctional/parallel/MountCmd/specific-port (1.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdspecific-port4165270640/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.50649ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdspecific-port4165270640/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 ssh "sudo umount -f /mount-9p": exit status 1 (228.841551ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-366561 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdspecific-port4165270640/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)
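Note on the mount flow: the specific-port variant pins the 9p server to --port 46464; the first findmnt probe may exit non-zero while the mount is still coming up (hence the retry above), and the final "umount -f" exits 32 because the mount is already gone by the time it runs. A minimal manual repro of the same sequence, assuming the same profile and a hypothetical host directory /tmp/mount-src:

	out/minikube-linux-amd64 mount -p functional-366561 /tmp/mount-src:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T /mount-9p | grep 9p"    # may need a retry until the 9p server is listening
	out/minikube-linux-amd64 -p functional-366561 ssh "sudo umount -f /mount-9p"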
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3750951838/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3750951838/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3750951838/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T" /mount1: exit status 1 (257.738832ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-366561 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3750951838/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3750951838/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-366561 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3750951838/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
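Note on cleanup: the test starts three concurrent 9p mounts from the same host directory, then runs "mount -p functional-366561 --kill=true", after which all three mount processes are already dead ("unable to find parent, assuming dead"). A minimal sketch of the same sequence, with /tmp/src standing in for the test's temp directory:

	out/minikube-linux-amd64 mount -p functional-366561 /tmp/src:/mount1 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-366561 /tmp/src:/mount2 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-366561 /tmp/src:/mount3 --alsologtostderr -v=1 &
	out/minikube-linux-amd64 mount -p functional-366561 --kill=true    # tears down every mount process belonging to the profile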
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image load --daemon gcr.io/google-containers/addon-resizer:functional-366561 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 image load --daemon gcr.io/google-containers/addon-resizer:functional-366561 --alsologtostderr: (2.815242465s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.04s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.157910301s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-366561
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image load --daemon gcr.io/google-containers/addon-resizer:functional-366561 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 image load --daemon gcr.io/google-containers/addon-resizer:functional-366561 --alsologtostderr: (4.338186293s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image save gcr.io/google-containers/addon-resizer:functional-366561 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 image save gcr.io/google-containers/addon-resizer:functional-366561 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.191660664s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.19s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image rm gcr.io/google-containers/addon-resizer:functional-366561 --alsologtostderr
2024/04/17 18:05:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.32463653s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.55s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-366561
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-366561 image save --daemon gcr.io/google-containers/addon-resizer:functional-366561 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-366561 image save --daemon gcr.io/google-containers/addon-resizer:functional-366561 --alsologtostderr: (1.187159706s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-366561
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.22s)
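Note on the image round-trip: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together verify that an image survives a full export/import cycle between the cluster's containerd store and the host docker daemon. Condensed from the commands above (the /tmp tarball path is a placeholder):

	out/minikube-linux-amd64 -p functional-366561 image save gcr.io/google-containers/addon-resizer:functional-366561 /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-366561 image rm gcr.io/google-containers/addon-resizer:functional-366561
	out/minikube-linux-amd64 -p functional-366561 image load /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-366561 image save --daemon gcr.io/google-containers/addon-resizer:functional-366561
	docker image inspect gcr.io/google-containers/addon-resizer:functional-366561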
TestFunctional/delete_addon-resizer_images (0.06s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-366561
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-366561
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-366561
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (199.04s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-833724 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0417 18:05:39.446514   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:07:55.604141   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:08:23.286921   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-833724 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m18.327490547s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.04s)

TestMultiControlPlane/serial/DeployApp (7.42s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-833724 -- rollout status deployment/busybox: (5.017305384s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-7kk92 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-d88z4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-tdjbv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-7kk92 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-d88z4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-tdjbv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-7kk92 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-d88z4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-tdjbv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.42s)

TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-7kk92 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-7kk92 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-d88z4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-d88z4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-tdjbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-833724 -- exec busybox-fc5497c4f-tdjbv -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)
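Note on the host-IP extraction: with the busybox image used here, line 5 of nslookup's output is the "Address 1: <ip> <name>" answer for the queried name, so awk 'NR==5' isolates that line and cut -d' ' -f3 keeps its third space-separated field, the resolved IP of host.minikube.internal (192.168.39.1, the libvirt bridge address that is then pinged). A standalone sketch of the same extraction, assuming that busybox output layout:

	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"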
TestMultiControlPlane/serial/AddWorkerNode (45.87s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-833724 -v=7 --alsologtostderr
E0417 18:09:31.261029   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:31.266433   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:31.276773   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:31.297127   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:31.337529   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:31.418376   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:31.578841   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:31.899929   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:32.540845   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:33.821638   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:09:36.382609   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-833724 -v=7 --alsologtostderr: (44.95728277s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.87s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-833724 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

TestMultiControlPlane/serial/CopyFile (13.86s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 status --output json -v=7 --alsologtostderr
E0417 18:09:41.503128   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp testdata/cp-test.txt ha-833724:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2203414529/001/cp-test_ha-833724.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724:/home/docker/cp-test.txt ha-833724-m02:/home/docker/cp-test_ha-833724_ha-833724-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m02 "sudo cat /home/docker/cp-test_ha-833724_ha-833724-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724:/home/docker/cp-test.txt ha-833724-m03:/home/docker/cp-test_ha-833724_ha-833724-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m03 "sudo cat /home/docker/cp-test_ha-833724_ha-833724-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724:/home/docker/cp-test.txt ha-833724-m04:/home/docker/cp-test_ha-833724_ha-833724-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m04 "sudo cat /home/docker/cp-test_ha-833724_ha-833724-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp testdata/cp-test.txt ha-833724-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2203414529/001/cp-test_ha-833724-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m02:/home/docker/cp-test.txt ha-833724:/home/docker/cp-test_ha-833724-m02_ha-833724.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724 "sudo cat /home/docker/cp-test_ha-833724-m02_ha-833724.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m02:/home/docker/cp-test.txt ha-833724-m03:/home/docker/cp-test_ha-833724-m02_ha-833724-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m03 "sudo cat /home/docker/cp-test_ha-833724-m02_ha-833724-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m02:/home/docker/cp-test.txt ha-833724-m04:/home/docker/cp-test_ha-833724-m02_ha-833724-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m04 "sudo cat /home/docker/cp-test_ha-833724-m02_ha-833724-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp testdata/cp-test.txt ha-833724-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2203414529/001/cp-test_ha-833724-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m03:/home/docker/cp-test.txt ha-833724:/home/docker/cp-test_ha-833724-m03_ha-833724.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724 "sudo cat /home/docker/cp-test_ha-833724-m03_ha-833724.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m03:/home/docker/cp-test.txt ha-833724-m02:/home/docker/cp-test_ha-833724-m03_ha-833724-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m02 "sudo cat /home/docker/cp-test_ha-833724-m03_ha-833724-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m03:/home/docker/cp-test.txt ha-833724-m04:/home/docker/cp-test_ha-833724-m03_ha-833724-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m04 "sudo cat /home/docker/cp-test_ha-833724-m03_ha-833724-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp testdata/cp-test.txt ha-833724-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m04 "sudo cat /home/docker/cp-test.txt"
E0417 18:09:51.744210   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2203414529/001/cp-test_ha-833724-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m04:/home/docker/cp-test.txt ha-833724:/home/docker/cp-test_ha-833724-m04_ha-833724.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724 "sudo cat /home/docker/cp-test_ha-833724-m04_ha-833724.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m04:/home/docker/cp-test.txt ha-833724-m02:/home/docker/cp-test_ha-833724-m04_ha-833724-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m02 "sudo cat /home/docker/cp-test_ha-833724-m04_ha-833724-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 cp ha-833724-m04:/home/docker/cp-test.txt ha-833724-m03:/home/docker/cp-test_ha-833724-m04_ha-833724-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m03 "sudo cat /home/docker/cp-test_ha-833724-m04_ha-833724-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.86s)
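The copy/verify round-trips above reduce to two patterns. A minimal sketch, assuming the locally built out/minikube-linux-amd64 binary and the ha-833724 profile from this run:

    # local file -> node, then read it back over ssh to confirm the contents
    out/minikube-linux-amd64 -p ha-833724 cp testdata/cp-test.txt ha-833724:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724 "sudo cat /home/docker/cp-test.txt"

    # node -> node: both sides of cp accept <node>:<path> specs
    out/minikube-linux-amd64 -p ha-833724 cp ha-833724:/home/docker/cp-test.txt ha-833724-m02:/home/docker/cp-test_ha-833724_ha-833724-m02.txt
    out/minikube-linux-amd64 -p ha-833724 ssh -n ha-833724-m02 "sudo cat /home/docker/cp-test_ha-833724_ha-833724-m02.txt"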

TestMultiControlPlane/serial/StopSecondaryNode (92.44s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 node stop m02 -v=7 --alsologtostderr
E0417 18:10:12.224479   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:10:53.185482   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-833724 node stop m02 -v=7 --alsologtostderr: (1m31.752677592s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr: exit status 7 (691.117071ms)

-- stdout --
	ha-833724
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-833724-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833724-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-833724-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0417 18:11:26.425366   94733 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:11:26.425504   94733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:11:26.425516   94733 out.go:304] Setting ErrFile to fd 2...
	I0417 18:11:26.425523   94733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:11:26.425807   94733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	I0417 18:11:26.426045   94733 out.go:298] Setting JSON to false
	I0417 18:11:26.426077   94733 mustload.go:65] Loading cluster: ha-833724
	I0417 18:11:26.426144   94733 notify.go:220] Checking for updates...
	I0417 18:11:26.426623   94733 config.go:182] Loaded profile config "ha-833724": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0417 18:11:26.426646   94733 status.go:255] checking status of ha-833724 ...
	I0417 18:11:26.427163   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.427227   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.445940   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0417 18:11:26.446438   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.447141   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.447166   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.447601   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.447852   94733 main.go:141] libmachine: (ha-833724) Calling .GetState
	I0417 18:11:26.449528   94733 status.go:330] ha-833724 host status = "Running" (err=<nil>)
	I0417 18:11:26.449547   94733 host.go:66] Checking if "ha-833724" exists ...
	I0417 18:11:26.449887   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.449932   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.467106   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I0417 18:11:26.467575   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.468078   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.468125   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.468560   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.468767   94733 main.go:141] libmachine: (ha-833724) Calling .GetIP
	I0417 18:11:26.472284   94733 main.go:141] libmachine: (ha-833724) DBG | domain ha-833724 has defined MAC address 52:54:00:0c:c7:55 in network mk-ha-833724
	I0417 18:11:26.472861   94733 main.go:141] libmachine: (ha-833724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c7:55", ip: ""} in network mk-ha-833724: {Iface:virbr1 ExpiryTime:2024-04-17 19:05:42 +0000 UTC Type:0 Mac:52:54:00:0c:c7:55 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-833724 Clientid:01:52:54:00:0c:c7:55}
	I0417 18:11:26.472914   94733 main.go:141] libmachine: (ha-833724) DBG | domain ha-833724 has defined IP address 192.168.39.66 and MAC address 52:54:00:0c:c7:55 in network mk-ha-833724
	I0417 18:11:26.473127   94733 host.go:66] Checking if "ha-833724" exists ...
	I0417 18:11:26.473505   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.473546   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.489386   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44355
	I0417 18:11:26.489764   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.490223   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.490242   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.490572   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.490766   94733 main.go:141] libmachine: (ha-833724) Calling .DriverName
	I0417 18:11:26.490962   94733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:11:26.490990   94733 main.go:141] libmachine: (ha-833724) Calling .GetSSHHostname
	I0417 18:11:26.493641   94733 main.go:141] libmachine: (ha-833724) DBG | domain ha-833724 has defined MAC address 52:54:00:0c:c7:55 in network mk-ha-833724
	I0417 18:11:26.494126   94733 main.go:141] libmachine: (ha-833724) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:c7:55", ip: ""} in network mk-ha-833724: {Iface:virbr1 ExpiryTime:2024-04-17 19:05:42 +0000 UTC Type:0 Mac:52:54:00:0c:c7:55 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:ha-833724 Clientid:01:52:54:00:0c:c7:55}
	I0417 18:11:26.494158   94733 main.go:141] libmachine: (ha-833724) DBG | domain ha-833724 has defined IP address 192.168.39.66 and MAC address 52:54:00:0c:c7:55 in network mk-ha-833724
	I0417 18:11:26.494294   94733 main.go:141] libmachine: (ha-833724) Calling .GetSSHPort
	I0417 18:11:26.494462   94733 main.go:141] libmachine: (ha-833724) Calling .GetSSHKeyPath
	I0417 18:11:26.494609   94733 main.go:141] libmachine: (ha-833724) Calling .GetSSHUsername
	I0417 18:11:26.494828   94733 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/ha-833724/id_rsa Username:docker}
	I0417 18:11:26.587491   94733 ssh_runner.go:195] Run: systemctl --version
	I0417 18:11:26.595295   94733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:11:26.616947   94733 kubeconfig.go:125] found "ha-833724" server: "https://192.168.39.254:8443"
	I0417 18:11:26.617003   94733 api_server.go:166] Checking apiserver status ...
	I0417 18:11:26.617048   94733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:11:26.637023   94733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0417 18:11:26.648608   94733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:11:26.648680   94733 ssh_runner.go:195] Run: ls
	I0417 18:11:26.654388   94733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:11:26.658733   94733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:11:26.658762   94733 status.go:422] ha-833724 apiserver status = Running (err=<nil>)
	I0417 18:11:26.658772   94733 status.go:257] ha-833724 status: &{Name:ha-833724 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:11:26.658793   94733 status.go:255] checking status of ha-833724-m02 ...
	I0417 18:11:26.659169   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.659216   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.675867   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46341
	I0417 18:11:26.676366   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.676829   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.676871   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.677186   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.677380   94733 main.go:141] libmachine: (ha-833724-m02) Calling .GetState
	I0417 18:11:26.678890   94733 status.go:330] ha-833724-m02 host status = "Stopped" (err=<nil>)
	I0417 18:11:26.678904   94733 status.go:343] host is not running, skipping remaining checks
	I0417 18:11:26.678910   94733 status.go:257] ha-833724-m02 status: &{Name:ha-833724-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:11:26.678926   94733 status.go:255] checking status of ha-833724-m03 ...
	I0417 18:11:26.679261   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.679307   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.693564   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0417 18:11:26.693997   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.694475   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.694501   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.694791   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.694925   94733 main.go:141] libmachine: (ha-833724-m03) Calling .GetState
	I0417 18:11:26.696534   94733 status.go:330] ha-833724-m03 host status = "Running" (err=<nil>)
	I0417 18:11:26.696558   94733 host.go:66] Checking if "ha-833724-m03" exists ...
	I0417 18:11:26.696940   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.696977   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.711563   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42291
	I0417 18:11:26.712038   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.712722   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.712768   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.713099   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.713303   94733 main.go:141] libmachine: (ha-833724-m03) Calling .GetIP
	I0417 18:11:26.716224   94733 main.go:141] libmachine: (ha-833724-m03) DBG | domain ha-833724-m03 has defined MAC address 52:54:00:3d:41:c8 in network mk-ha-833724
	I0417 18:11:26.716718   94733 main.go:141] libmachine: (ha-833724-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:41:c8", ip: ""} in network mk-ha-833724: {Iface:virbr1 ExpiryTime:2024-04-17 19:07:53 +0000 UTC Type:0 Mac:52:54:00:3d:41:c8 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-833724-m03 Clientid:01:52:54:00:3d:41:c8}
	I0417 18:11:26.716742   94733 main.go:141] libmachine: (ha-833724-m03) DBG | domain ha-833724-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:3d:41:c8 in network mk-ha-833724
	I0417 18:11:26.716942   94733 host.go:66] Checking if "ha-833724-m03" exists ...
	I0417 18:11:26.717272   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.717306   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.733803   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0417 18:11:26.734269   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.734721   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.734740   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.735021   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.735193   94733 main.go:141] libmachine: (ha-833724-m03) Calling .DriverName
	I0417 18:11:26.735391   94733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:11:26.735415   94733 main.go:141] libmachine: (ha-833724-m03) Calling .GetSSHHostname
	I0417 18:11:26.737996   94733 main.go:141] libmachine: (ha-833724-m03) DBG | domain ha-833724-m03 has defined MAC address 52:54:00:3d:41:c8 in network mk-ha-833724
	I0417 18:11:26.738386   94733 main.go:141] libmachine: (ha-833724-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3d:41:c8", ip: ""} in network mk-ha-833724: {Iface:virbr1 ExpiryTime:2024-04-17 19:07:53 +0000 UTC Type:0 Mac:52:54:00:3d:41:c8 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-833724-m03 Clientid:01:52:54:00:3d:41:c8}
	I0417 18:11:26.738409   94733 main.go:141] libmachine: (ha-833724-m03) DBG | domain ha-833724-m03 has defined IP address 192.168.39.116 and MAC address 52:54:00:3d:41:c8 in network mk-ha-833724
	I0417 18:11:26.738536   94733 main.go:141] libmachine: (ha-833724-m03) Calling .GetSSHPort
	I0417 18:11:26.738726   94733 main.go:141] libmachine: (ha-833724-m03) Calling .GetSSHKeyPath
	I0417 18:11:26.738904   94733 main.go:141] libmachine: (ha-833724-m03) Calling .GetSSHUsername
	I0417 18:11:26.739045   94733 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/ha-833724-m03/id_rsa Username:docker}
	I0417 18:11:26.827217   94733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:11:26.847597   94733 kubeconfig.go:125] found "ha-833724" server: "https://192.168.39.254:8443"
	I0417 18:11:26.847625   94733 api_server.go:166] Checking apiserver status ...
	I0417 18:11:26.847656   94733 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:11:26.865305   94733 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup
	W0417 18:11:26.882452   94733 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1159/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:11:26.882503   94733 ssh_runner.go:195] Run: ls
	I0417 18:11:26.887878   94733 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0417 18:11:26.892262   94733 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0417 18:11:26.892286   94733 status.go:422] ha-833724-m03 apiserver status = Running (err=<nil>)
	I0417 18:11:26.892298   94733 status.go:257] ha-833724-m03 status: &{Name:ha-833724-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:11:26.892318   94733 status.go:255] checking status of ha-833724-m04 ...
	I0417 18:11:26.892621   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.892652   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.909486   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0417 18:11:26.909911   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.910517   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.910544   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.910943   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.911169   94733 main.go:141] libmachine: (ha-833724-m04) Calling .GetState
	I0417 18:11:26.912890   94733 status.go:330] ha-833724-m04 host status = "Running" (err=<nil>)
	I0417 18:11:26.912914   94733 host.go:66] Checking if "ha-833724-m04" exists ...
	I0417 18:11:26.913182   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.913239   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.929925   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0417 18:11:26.930420   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.930885   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.930907   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.931259   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.931450   94733 main.go:141] libmachine: (ha-833724-m04) Calling .GetIP
	I0417 18:11:26.934416   94733 main.go:141] libmachine: (ha-833724-m04) DBG | domain ha-833724-m04 has defined MAC address 52:54:00:72:00:38 in network mk-ha-833724
	I0417 18:11:26.934839   94733 main.go:141] libmachine: (ha-833724-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:00:38", ip: ""} in network mk-ha-833724: {Iface:virbr1 ExpiryTime:2024-04-17 19:09:10 +0000 UTC Type:0 Mac:52:54:00:72:00:38 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-833724-m04 Clientid:01:52:54:00:72:00:38}
	I0417 18:11:26.934874   94733 main.go:141] libmachine: (ha-833724-m04) DBG | domain ha-833724-m04 has defined IP address 192.168.39.67 and MAC address 52:54:00:72:00:38 in network mk-ha-833724
	I0417 18:11:26.935013   94733 host.go:66] Checking if "ha-833724-m04" exists ...
	I0417 18:11:26.935320   94733 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:11:26.935371   94733 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:11:26.949756   94733 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I0417 18:11:26.950176   94733 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:11:26.950584   94733 main.go:141] libmachine: Using API Version  1
	I0417 18:11:26.950628   94733 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:11:26.950927   94733 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:11:26.951133   94733 main.go:141] libmachine: (ha-833724-m04) Calling .DriverName
	I0417 18:11:26.951330   94733 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:11:26.951357   94733 main.go:141] libmachine: (ha-833724-m04) Calling .GetSSHHostname
	I0417 18:11:26.954135   94733 main.go:141] libmachine: (ha-833724-m04) DBG | domain ha-833724-m04 has defined MAC address 52:54:00:72:00:38 in network mk-ha-833724
	I0417 18:11:26.954486   94733 main.go:141] libmachine: (ha-833724-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:00:38", ip: ""} in network mk-ha-833724: {Iface:virbr1 ExpiryTime:2024-04-17 19:09:10 +0000 UTC Type:0 Mac:52:54:00:72:00:38 Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:ha-833724-m04 Clientid:01:52:54:00:72:00:38}
	I0417 18:11:26.954512   94733 main.go:141] libmachine: (ha-833724-m04) DBG | domain ha-833724-m04 has defined IP address 192.168.39.67 and MAC address 52:54:00:72:00:38 in network mk-ha-833724
	I0417 18:11:26.954673   94733 main.go:141] libmachine: (ha-833724-m04) Calling .GetSSHPort
	I0417 18:11:26.954865   94733 main.go:141] libmachine: (ha-833724-m04) Calling .GetSSHKeyPath
	I0417 18:11:26.955029   94733 main.go:141] libmachine: (ha-833724-m04) Calling .GetSSHUsername
	I0417 18:11:26.955199   94733 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/ha-833724-m04/id_rsa Username:docker}
	I0417 18:11:27.037821   94733 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:11:27.056535   94733 status.go:257] ha-833724-m04 status: &{Name:ha-833724-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.44s)
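The transcript shows the property under test: with one of the three control-plane nodes stopped, the load-balanced apiserver endpoint (https://192.168.39.254:8443/healthz) still returns 200, while `status` reports the stopped node and exits non-zero. A minimal reproduction sketch, under the same assumptions as above:

    out/minikube-linux-amd64 -p ha-833724 node stop m02
    out/minikube-linux-amd64 -p ha-833724 status
    echo $?    # 7 while any node is stopped, matching the exit status above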

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.42s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.44s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-833724 node start m02 -v=7 --alsologtostderr: (43.484524473s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.44s)
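Restarting the stopped node is the inverse operation; a sketch (the exit-0 status and Ready state are the expected outcome, inferred from the passing run above):

    out/minikube-linux-amd64 -p ha-833724 node start m02
    out/minikube-linux-amd64 -p ha-833724 status    # should exit 0 once all nodes are Running
    kubectl get nodes                               # m02 rejoins as Ready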

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.57s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.57s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (439.93s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-833724 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-833724 -v=7 --alsologtostderr
E0417 18:12:15.106308   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:12:55.604068   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:14:31.260737   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:14:58.947595   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-833724 -v=7 --alsologtostderr: (4m38.725564125s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-833724 --wait=true -v=7 --alsologtostderr
E0417 18:17:55.603574   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:19:18.649452   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:19:31.260992   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-833724 --wait=true -v=7 --alsologtostderr: (2m41.079349589s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-833724
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (439.93s)
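The property under test is that a full stop/start cycle preserves the node set (here: ~4m39s to stop, ~2m41s to restart). Sketched with the same profile:

    out/minikube-linux-amd64 node list -p ha-833724             # record the node set
    out/minikube-linux-amd64 stop -p ha-833724
    out/minikube-linux-amd64 start -p ha-833724 --wait=true
    out/minikube-linux-amd64 node list -p ha-833724             # should match the pre-stop list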

TestMultiControlPlane/serial/DeleteSecondaryNode (8.15s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-833724 node delete m03 -v=7 --alsologtostderr: (7.364893327s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.15s)
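The go-template in the final kubectl call flattens each node's Ready condition to one status per line, so a fully healthy cluster prints only True lines. Equivalent by hand, with the quoting normalized for an interactive shell:

    out/minikube-linux-amd64 -p ha-833724 node delete m03
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'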

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

TestMultiControlPlane/serial/StopCluster (276.54s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 stop -v=7 --alsologtostderr
E0417 18:22:55.604110   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-833724 stop -v=7 --alsologtostderr: (4m36.420312603s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr: exit status 7 (121.280364ms)

-- stdout --
	ha-833724
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833724-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833724-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0417 18:24:17.437313   98237 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:24:17.437430   98237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:24:17.437437   98237 out.go:304] Setting ErrFile to fd 2...
	I0417 18:24:17.437441   98237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:24:17.437639   98237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	I0417 18:24:17.437817   98237 out.go:298] Setting JSON to false
	I0417 18:24:17.437844   98237 mustload.go:65] Loading cluster: ha-833724
	I0417 18:24:17.437941   98237 notify.go:220] Checking for updates...
	I0417 18:24:17.438204   98237 config.go:182] Loaded profile config "ha-833724": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0417 18:24:17.438221   98237 status.go:255] checking status of ha-833724 ...
	I0417 18:24:17.438639   98237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:24:17.438722   98237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:24:17.458647   98237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43151
	I0417 18:24:17.459085   98237 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:24:17.459667   98237 main.go:141] libmachine: Using API Version  1
	I0417 18:24:17.459687   98237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:24:17.460066   98237 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:24:17.460271   98237 main.go:141] libmachine: (ha-833724) Calling .GetState
	I0417 18:24:17.461939   98237 status.go:330] ha-833724 host status = "Stopped" (err=<nil>)
	I0417 18:24:17.461951   98237 status.go:343] host is not running, skipping remaining checks
	I0417 18:24:17.461957   98237 status.go:257] ha-833724 status: &{Name:ha-833724 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:24:17.461992   98237 status.go:255] checking status of ha-833724-m02 ...
	I0417 18:24:17.462262   98237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:24:17.462300   98237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:24:17.476998   98237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0417 18:24:17.477367   98237 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:24:17.477855   98237 main.go:141] libmachine: Using API Version  1
	I0417 18:24:17.477878   98237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:24:17.478192   98237 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:24:17.478381   98237 main.go:141] libmachine: (ha-833724-m02) Calling .GetState
	I0417 18:24:17.479766   98237 status.go:330] ha-833724-m02 host status = "Stopped" (err=<nil>)
	I0417 18:24:17.479782   98237 status.go:343] host is not running, skipping remaining checks
	I0417 18:24:17.479788   98237 status.go:257] ha-833724-m02 status: &{Name:ha-833724-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:24:17.479804   98237 status.go:255] checking status of ha-833724-m04 ...
	I0417 18:24:17.480097   98237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:24:17.480138   98237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:24:17.494477   98237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44311
	I0417 18:24:17.494872   98237 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:24:17.495339   98237 main.go:141] libmachine: Using API Version  1
	I0417 18:24:17.495358   98237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:24:17.495689   98237 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:24:17.495874   98237 main.go:141] libmachine: (ha-833724-m04) Calling .GetState
	I0417 18:24:17.497200   98237 status.go:330] ha-833724-m04 host status = "Stopped" (err=<nil>)
	I0417 18:24:17.497212   98237 status.go:343] host is not running, skipping remaining checks
	I0417 18:24:17.497218   98237 status.go:257] ha-833724-m04 status: &{Name:ha-833724-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (276.54s)

TestMultiControlPlane/serial/RestartCluster (155.43s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-833724 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0417 18:24:31.261128   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:25:54.308019   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-833724 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m34.647709341s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (155.43s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

TestMultiControlPlane/serial/AddSecondaryNode (75.13s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-833724 --control-plane -v=7 --alsologtostderr
E0417 18:27:55.603845   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-833724 --control-plane -v=7 --alsologtostderr: (1m14.251076378s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-833724 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.13s)
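Growing the control plane back is a single command; a sketch under the same assumptions:

    out/minikube-linux-amd64 node add -p ha-833724 --control-plane
    out/minikube-linux-amd64 -p ha-833724 status    # the new node should appear with "type: Control Plane"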

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

TestJSONOutput/start/Command (61.84s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-536081 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-536081 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m1.835352273s)
--- PASS: TestJSONOutput/start/Command (61.84s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-536081 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-536081 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.35s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-536081 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-536081 --output=json --user=testUser: (7.354196702s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-738101 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-738101 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.856579ms)

-- stdout --
	{"specversion":"1.0","id":"ef8f1210-9cb0-49fa-b67d-3a4c594dc2b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-738101] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f4b7b71-4001-451f-aa98-4891b00261d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18665"}}
	{"specversion":"1.0","id":"c8d23a5b-2566-4550-b647-d69c9d5b7a0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0deb48a2-e225-4113-8503-9ba78026eda4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig"}}
	{"specversion":"1.0","id":"d80666c3-2b6f-40bb-a234-263b66f767da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube"}}
	{"specversion":"1.0","id":"604f0f87-ad45-4f59-9de7-8c600129ac6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f1dc3576-dda8-4eec-8e1c-3dc1bfeba090","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a589be9-003d-4931-8fec-d64283a87c15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-738101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-738101
--- PASS: TestErrorJSONOutput (0.21s)
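With --output=json, minikube emits one CloudEvents-style JSON object per line (type io.k8s.sigs.minikube.step, .info, or .error, payload under .data), as in the stdout above. A sketch for consuming the stream, assuming jq is installed and using a hypothetical profile name:

    out/minikube-linux-amd64 start -p demo --output=json \
      | jq -r 'select(.data.message != null) | .data.message'
    # error events additionally carry an exitcode and a name such as DRV_UNSUPPORTED_OS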

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (93.86s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-913531 --driver=kvm2  --container-runtime=containerd
E0417 18:29:31.261085   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-913531 --driver=kvm2  --container-runtime=containerd: (44.960161694s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-916386 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-916386 --driver=kvm2  --container-runtime=containerd: (45.978118054s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-913531
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-916386
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-916386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-916386
helpers_test.go:175: Cleaning up "first-913531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-913531
--- PASS: TestMinikubeProfile (93.86s)
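`profile` selects which cluster later commands target, and `profile list -ojson` reports the profiles as JSON (which entry is active can be read from that output). Sketched with the two profiles from this run:

    out/minikube-linux-amd64 profile first-913531       # make first-913531 the active profile
    out/minikube-linux-amd64 profile list -ojson        # inspect both profiles as JSON
    out/minikube-linux-amd64 profile second-916386      # switch the active profile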

TestMountStart/serial/StartWithMountFirst (28.45s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-013438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-013438 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.447670601s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.45s)

TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-013438 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-013438 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
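The verification is two ssh probes: the mounted host directory must be listable from inside the guest, and the guest's mount table must show a 9p filesystem (minikube's --mount uses the 9p protocol). By hand:

    out/minikube-linux-amd64 -p mount-start-1-013438 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-013438 ssh -- mount | grep 9p    # grep runs locally on the ssh output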

TestMountStart/serial/StartWithMountSecond (30.13s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-032780 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-032780 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.132395728s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.13s)

TestMountStart/serial/VerifyMountSecond (0.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-032780 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-032780 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-013438 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-032780 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-032780 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.59s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-032780
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-032780: (1.586077063s)
--- PASS: TestMountStart/serial/Stop (1.59s)

TestMountStart/serial/RestartStopped (23.14s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-032780
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-032780: (22.143588443s)
--- PASS: TestMountStart/serial/RestartStopped (23.14s)

TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-032780 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-032780 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

TestMultiNode/serial/FreshStart2Nodes (104.21s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-131005 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0417 18:32:55.603049   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-131005 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m43.790904612s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.21s)

TestMultiNode/serial/DeployApp2Nodes (4.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-131005 -- rollout status deployment/busybox: (2.696961175s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-6h9j2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-7ttrr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-6h9j2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-7ttrr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-6h9j2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-7ttrr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.28s)
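
Note: DeployApp2Nodes schedules a two-replica busybox deployment across both nodes, collects the pod names with the jsonpath query shown above, and requires every pod to resolve an external name (kubernetes.io), the cluster service short name (kubernetes.default), and the full FQDN. A sketch of the same loop in Go (context name taken from this log; illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Collect the deployment's pod names the same way the test does.
        out, err := exec.Command("kubectl", "--context", "multinode-131005",
            "get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
        if err != nil {
            panic(err)
        }
        names := []string{"kubernetes.io", "kubernetes.default",
            "kubernetes.default.svc.cluster.local"}
        for _, pod := range strings.Fields(string(out)) {
            for _, host := range names {
                // Every pod must resolve every name for the step to pass.
                err := exec.Command("kubectl", "--context", "multinode-131005",
                    "exec", pod, "--", "nslookup", host).Run()
                fmt.Printf("%s -> %s: err=%v\n", pod, host, err)
            }
        }
    }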

TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-6h9j2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-6h9j2 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-7ttrr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-131005 -- exec busybox-fc5497c4f-7ttrr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
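
Note: the pipeline "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3" takes the fifth line of the pod's nslookup output and that line's third space-separated field, which is the host's IP on the minikube network (192.168.39.1 here); the test then pings that address from each pod. A Go equivalent of the parsing step, assuming busybox 1.28-style nslookup output where line 5 reads "Address 1: <ip>" (that format is an assumption, not shown in this log):

    package main

    import (
        "fmt"
        "strings"
    )

    // hostIP mimics "awk 'NR==5' | cut -d' ' -f3": the third space-separated
    // field of the fifth line of the nslookup output.
    func hostIP(nslookupOut string) string {
        lines := strings.Split(nslookupOut, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Split(lines[4], " ")
        if len(fields) < 3 {
            return ""
        }
        return fields[2]
    }

    func main() {
        // Hand-written sample in the assumed busybox nslookup format.
        sample := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.39.1\n"
        fmt.Println(hostIP(sample)) // 192.168.39.1
    }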

TestMultiNode/serial/AddNode (42.53s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-131005 -v 3 --alsologtostderr
E0417 18:34:31.260893   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-131005 -v 3 --alsologtostderr: (41.935867021s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.53s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-131005 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (7.69s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp testdata/cp-test.txt multinode-131005:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp multinode-131005:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile232540657/001/cp-test_multinode-131005.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp multinode-131005:/home/docker/cp-test.txt multinode-131005-m02:/home/docker/cp-test_multinode-131005_multinode-131005-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m02 "sudo cat /home/docker/cp-test_multinode-131005_multinode-131005-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp multinode-131005:/home/docker/cp-test.txt multinode-131005-m03:/home/docker/cp-test_multinode-131005_multinode-131005-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m03 "sudo cat /home/docker/cp-test_multinode-131005_multinode-131005-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp testdata/cp-test.txt multinode-131005-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp multinode-131005-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile232540657/001/cp-test_multinode-131005-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp multinode-131005-m02:/home/docker/cp-test.txt multinode-131005:/home/docker/cp-test_multinode-131005-m02_multinode-131005.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005 "sudo cat /home/docker/cp-test_multinode-131005-m02_multinode-131005.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp multinode-131005-m02:/home/docker/cp-test.txt multinode-131005-m03:/home/docker/cp-test_multinode-131005-m02_multinode-131005-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m03 "sudo cat /home/docker/cp-test_multinode-131005-m02_multinode-131005-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp testdata/cp-test.txt multinode-131005-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp multinode-131005-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile232540657/001/cp-test_multinode-131005-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp multinode-131005-m03:/home/docker/cp-test.txt multinode-131005:/home/docker/cp-test_multinode-131005-m03_multinode-131005.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005 "sudo cat /home/docker/cp-test_multinode-131005-m03_multinode-131005.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 cp multinode-131005-m03:/home/docker/cp-test.txt multinode-131005-m02:/home/docker/cp-test_multinode-131005-m03_multinode-131005-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 ssh -n multinode-131005-m02 "sudo cat /home/docker/cp-test_multinode-131005-m03_multinode-131005-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.69s)
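
Note: CopyFile exercises every copy direction "minikube cp" supports: host to node, node back to the host, and node to node for each ordered pair, verifying every copy with "ssh -n <node> sudo cat". A condensed Go sketch of the node-to-node legs (profile and node names from this log; not the test's actual code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    const bin = "out/minikube-linux-amd64" // binary path used throughout this log

    // cpAndVerify copies src to dstNode:dstPath with "minikube cp", then cats
    // the file on the destination node to confirm the contents arrived.
    func cpAndVerify(profile, src, dstNode, dstPath string) error {
        if err := exec.Command(bin, "-p", profile, "cp",
            src, dstNode+":"+dstPath).Run(); err != nil {
            return err
        }
        return exec.Command(bin, "-p", profile, "ssh", "-n", dstNode,
            "sudo cat "+dstPath).Run()
    }

    func main() {
        nodes := []string{"multinode-131005", "multinode-131005-m02", "multinode-131005-m03"}
        for _, src := range nodes {
            for _, dst := range nodes {
                if src == dst {
                    continue
                }
                err := cpAndVerify("multinode-131005",
                    src+":/home/docker/cp-test.txt", dst,
                    fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst))
                fmt.Println(src, "->", dst, err)
            }
        }
    }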

TestMultiNode/serial/StopNode (2.44s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-131005 node stop m03: (1.553994015s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-131005 status: exit status 7 (439.411585ms)
-- stdout --
	multinode-131005
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-131005-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-131005-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-131005 status --alsologtostderr: exit status 7 (444.284071ms)
-- stdout --
	multinode-131005
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-131005-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-131005-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0417 18:35:05.962700  105044 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:35:05.962926  105044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:35:05.962934  105044 out.go:304] Setting ErrFile to fd 2...
	I0417 18:35:05.962937  105044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:35:05.963141  105044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	I0417 18:35:05.963303  105044 out.go:298] Setting JSON to false
	I0417 18:35:05.963329  105044 mustload.go:65] Loading cluster: multinode-131005
	I0417 18:35:05.963362  105044 notify.go:220] Checking for updates...
	I0417 18:35:05.963681  105044 config.go:182] Loaded profile config "multinode-131005": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0417 18:35:05.963697  105044 status.go:255] checking status of multinode-131005 ...
	I0417 18:35:05.964056  105044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:35:05.964111  105044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:35:05.984685  105044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43951
	I0417 18:35:05.985125  105044 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:35:05.985818  105044 main.go:141] libmachine: Using API Version  1
	I0417 18:35:05.985840  105044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:35:05.986243  105044 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:35:05.986435  105044 main.go:141] libmachine: (multinode-131005) Calling .GetState
	I0417 18:35:05.987934  105044 status.go:330] multinode-131005 host status = "Running" (err=<nil>)
	I0417 18:35:05.987950  105044 host.go:66] Checking if "multinode-131005" exists ...
	I0417 18:35:05.988334  105044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:35:05.988397  105044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:35:06.003471  105044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34567
	I0417 18:35:06.003880  105044 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:35:06.004289  105044 main.go:141] libmachine: Using API Version  1
	I0417 18:35:06.004312  105044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:35:06.004635  105044 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:35:06.004832  105044 main.go:141] libmachine: (multinode-131005) Calling .GetIP
	I0417 18:35:06.007608  105044 main.go:141] libmachine: (multinode-131005) DBG | domain multinode-131005 has defined MAC address 52:54:00:ae:e1:0e in network mk-multinode-131005
	I0417 18:35:06.008011  105044 main.go:141] libmachine: (multinode-131005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:e1:0e", ip: ""} in network mk-multinode-131005: {Iface:virbr1 ExpiryTime:2024-04-17 19:32:39 +0000 UTC Type:0 Mac:52:54:00:ae:e1:0e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-131005 Clientid:01:52:54:00:ae:e1:0e}
	I0417 18:35:06.008048  105044 main.go:141] libmachine: (multinode-131005) DBG | domain multinode-131005 has defined IP address 192.168.39.159 and MAC address 52:54:00:ae:e1:0e in network mk-multinode-131005
	I0417 18:35:06.008117  105044 host.go:66] Checking if "multinode-131005" exists ...
	I0417 18:35:06.008391  105044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:35:06.008463  105044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:35:06.023023  105044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36821
	I0417 18:35:06.023417  105044 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:35:06.023856  105044 main.go:141] libmachine: Using API Version  1
	I0417 18:35:06.023877  105044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:35:06.024187  105044 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:35:06.024380  105044 main.go:141] libmachine: (multinode-131005) Calling .DriverName
	I0417 18:35:06.024594  105044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:35:06.024615  105044 main.go:141] libmachine: (multinode-131005) Calling .GetSSHHostname
	I0417 18:35:06.026969  105044 main.go:141] libmachine: (multinode-131005) DBG | domain multinode-131005 has defined MAC address 52:54:00:ae:e1:0e in network mk-multinode-131005
	I0417 18:35:06.027322  105044 main.go:141] libmachine: (multinode-131005) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:e1:0e", ip: ""} in network mk-multinode-131005: {Iface:virbr1 ExpiryTime:2024-04-17 19:32:39 +0000 UTC Type:0 Mac:52:54:00:ae:e1:0e Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-131005 Clientid:01:52:54:00:ae:e1:0e}
	I0417 18:35:06.027354  105044 main.go:141] libmachine: (multinode-131005) DBG | domain multinode-131005 has defined IP address 192.168.39.159 and MAC address 52:54:00:ae:e1:0e in network mk-multinode-131005
	I0417 18:35:06.027436  105044 main.go:141] libmachine: (multinode-131005) Calling .GetSSHPort
	I0417 18:35:06.027603  105044 main.go:141] libmachine: (multinode-131005) Calling .GetSSHKeyPath
	I0417 18:35:06.027753  105044 main.go:141] libmachine: (multinode-131005) Calling .GetSSHUsername
	I0417 18:35:06.027901  105044 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/multinode-131005/id_rsa Username:docker}
	I0417 18:35:06.114011  105044 ssh_runner.go:195] Run: systemctl --version
	I0417 18:35:06.121230  105044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:35:06.136315  105044 kubeconfig.go:125] found "multinode-131005" server: "https://192.168.39.159:8443"
	I0417 18:35:06.136350  105044 api_server.go:166] Checking apiserver status ...
	I0417 18:35:06.136382  105044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0417 18:35:06.158796  105044 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	W0417 18:35:06.169747  105044 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0417 18:35:06.169805  105044 ssh_runner.go:195] Run: ls
	I0417 18:35:06.176337  105044 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0417 18:35:06.180866  105044 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0417 18:35:06.180889  105044 status.go:422] multinode-131005 apiserver status = Running (err=<nil>)
	I0417 18:35:06.180905  105044 status.go:257] multinode-131005 status: &{Name:multinode-131005 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:35:06.180924  105044 status.go:255] checking status of multinode-131005-m02 ...
	I0417 18:35:06.181206  105044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:35:06.181238  105044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:35:06.196367  105044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46543
	I0417 18:35:06.196727  105044 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:35:06.197223  105044 main.go:141] libmachine: Using API Version  1
	I0417 18:35:06.197246  105044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:35:06.197555  105044 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:35:06.197742  105044 main.go:141] libmachine: (multinode-131005-m02) Calling .GetState
	I0417 18:35:06.199213  105044 status.go:330] multinode-131005-m02 host status = "Running" (err=<nil>)
	I0417 18:35:06.199231  105044 host.go:66] Checking if "multinode-131005-m02" exists ...
	I0417 18:35:06.199511  105044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:35:06.199574  105044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:35:06.214334  105044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39451
	I0417 18:35:06.214726  105044 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:35:06.215195  105044 main.go:141] libmachine: Using API Version  1
	I0417 18:35:06.215218  105044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:35:06.215623  105044 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:35:06.215833  105044 main.go:141] libmachine: (multinode-131005-m02) Calling .GetIP
	I0417 18:35:06.218500  105044 main.go:141] libmachine: (multinode-131005-m02) DBG | domain multinode-131005-m02 has defined MAC address 52:54:00:58:19:f6 in network mk-multinode-131005
	I0417 18:35:06.218982  105044 main.go:141] libmachine: (multinode-131005-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:19:f6", ip: ""} in network mk-multinode-131005: {Iface:virbr1 ExpiryTime:2024-04-17 19:33:41 +0000 UTC Type:0 Mac:52:54:00:58:19:f6 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:multinode-131005-m02 Clientid:01:52:54:00:58:19:f6}
	I0417 18:35:06.219012  105044 main.go:141] libmachine: (multinode-131005-m02) DBG | domain multinode-131005-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:58:19:f6 in network mk-multinode-131005
	I0417 18:35:06.219145  105044 host.go:66] Checking if "multinode-131005-m02" exists ...
	I0417 18:35:06.219445  105044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:35:06.219487  105044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:35:06.234385  105044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34491
	I0417 18:35:06.234757  105044 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:35:06.235191  105044 main.go:141] libmachine: Using API Version  1
	I0417 18:35:06.235218  105044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:35:06.235560  105044 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:35:06.235751  105044 main.go:141] libmachine: (multinode-131005-m02) Calling .DriverName
	I0417 18:35:06.235984  105044 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0417 18:35:06.236006  105044 main.go:141] libmachine: (multinode-131005-m02) Calling .GetSSHHostname
	I0417 18:35:06.238825  105044 main.go:141] libmachine: (multinode-131005-m02) DBG | domain multinode-131005-m02 has defined MAC address 52:54:00:58:19:f6 in network mk-multinode-131005
	I0417 18:35:06.239284  105044 main.go:141] libmachine: (multinode-131005-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:19:f6", ip: ""} in network mk-multinode-131005: {Iface:virbr1 ExpiryTime:2024-04-17 19:33:41 +0000 UTC Type:0 Mac:52:54:00:58:19:f6 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:multinode-131005-m02 Clientid:01:52:54:00:58:19:f6}
	I0417 18:35:06.239313  105044 main.go:141] libmachine: (multinode-131005-m02) DBG | domain multinode-131005-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:58:19:f6 in network mk-multinode-131005
	I0417 18:35:06.239478  105044 main.go:141] libmachine: (multinode-131005-m02) Calling .GetSSHPort
	I0417 18:35:06.239654  105044 main.go:141] libmachine: (multinode-131005-m02) Calling .GetSSHKeyPath
	I0417 18:35:06.239820  105044 main.go:141] libmachine: (multinode-131005-m02) Calling .GetSSHUsername
	I0417 18:35:06.239998  105044 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18665-75265/.minikube/machines/multinode-131005-m02/id_rsa Username:docker}
	I0417 18:35:06.316502  105044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0417 18:35:06.333050  105044 status.go:257] multinode-131005-m02 status: &{Name:multinode-131005-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:35:06.333092  105044 status.go:255] checking status of multinode-131005-m03 ...
	I0417 18:35:06.333443  105044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:35:06.333489  105044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:35:06.348333  105044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I0417 18:35:06.348713  105044 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:35:06.349227  105044 main.go:141] libmachine: Using API Version  1
	I0417 18:35:06.349277  105044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:35:06.349621  105044 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:35:06.349798  105044 main.go:141] libmachine: (multinode-131005-m03) Calling .GetState
	I0417 18:35:06.351347  105044 status.go:330] multinode-131005-m03 host status = "Stopped" (err=<nil>)
	I0417 18:35:06.351360  105044 status.go:343] host is not running, skipping remaining checks
	I0417 18:35:06.351366  105044 status.go:257] multinode-131005-m03 status: &{Name:multinode-131005-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
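
Note: "minikube status" deliberately exits with code 7 when any host in the profile is stopped, so the test asserts the exit status as well as the per-node stdout above. One way to distinguish that expected non-zero exit from a real failure in Go:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "multinode-131005", "status").Output()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("all nodes running\n%s", out)
        case errors.As(err, &ee):
            // Exit code 7 means some host is stopped; the test expects it here.
            fmt.Printf("status exited %d\n%s", ee.ExitCode(), out)
        default:
            panic(err) // binary missing, etc.
        }
    }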

TestMultiNode/serial/StartAfterStop (26.61s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-131005 node start m03 -v=7 --alsologtostderr: (25.95346334s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.61s)

TestMultiNode/serial/RestartKeepsNodes (300.31s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-131005
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-131005
E0417 18:35:58.650038   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:37:55.603219   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-131005: (3m5.503499724s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-131005 --wait=true -v=8 --alsologtostderr
E0417 18:39:31.261488   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-131005 --wait=true -v=8 --alsologtostderr: (1m54.694460157s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-131005
--- PASS: TestMultiNode/serial/RestartKeepsNodes (300.31s)

TestMultiNode/serial/DeleteNode (2.37s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-131005 node delete m03: (1.82198007s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.37s)
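
Note: the go-template passed to kubectl above prints each node's Ready condition status on its own line, which is how the test confirms both remaining nodes stay Ready after the delete. kubectl's go-template output mode is built on Go's text/template, so the same template can be evaluated locally against node-shaped data (a sketch with hand-built sample data):

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // The same template the test hands to kubectl -o go-template.
        const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

        // A two-node list shaped like `kubectl get nodes -o json` output.
        data := map[string]any{
            "items": []map[string]any{
                {"status": map[string]any{"conditions": []map[string]any{
                    {"type": "Ready", "status": "True"},
                }}},
                {"status": map[string]any{"conditions": []map[string]any{
                    {"type": "Ready", "status": "True"},
                }}},
            },
        }
        t := template.Must(template.New("ready").Parse(tpl))
        if err := t.Execute(os.Stdout, data); err != nil { // prints " True" per node
            panic(err)
        }
    }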

TestMultiNode/serial/StopMultiNode (184.13s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 stop
E0417 18:42:34.311099   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
E0417 18:42:55.603720   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-131005 stop: (3m3.935369105s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-131005 status: exit status 7 (93.476586ms)
-- stdout --
	multinode-131005
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-131005-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-131005 status --alsologtostderr: exit status 7 (97.673487ms)
-- stdout --
	multinode-131005
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-131005-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0417 18:43:39.731908  107199 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:43:39.732019  107199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:43:39.732029  107199 out.go:304] Setting ErrFile to fd 2...
	I0417 18:43:39.732033  107199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:43:39.732250  107199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	I0417 18:43:39.732421  107199 out.go:298] Setting JSON to false
	I0417 18:43:39.732449  107199 mustload.go:65] Loading cluster: multinode-131005
	I0417 18:43:39.732561  107199 notify.go:220] Checking for updates...
	I0417 18:43:39.732837  107199 config.go:182] Loaded profile config "multinode-131005": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0417 18:43:39.732852  107199 status.go:255] checking status of multinode-131005 ...
	I0417 18:43:39.733249  107199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:43:39.733305  107199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:43:39.753510  107199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0417 18:43:39.753944  107199 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:43:39.754494  107199 main.go:141] libmachine: Using API Version  1
	I0417 18:43:39.754519  107199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:43:39.754929  107199 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:43:39.755140  107199 main.go:141] libmachine: (multinode-131005) Calling .GetState
	I0417 18:43:39.756780  107199 status.go:330] multinode-131005 host status = "Stopped" (err=<nil>)
	I0417 18:43:39.756794  107199 status.go:343] host is not running, skipping remaining checks
	I0417 18:43:39.756801  107199 status.go:257] multinode-131005 status: &{Name:multinode-131005 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0417 18:43:39.756840  107199 status.go:255] checking status of multinode-131005-m02 ...
	I0417 18:43:39.757183  107199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0417 18:43:39.757230  107199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0417 18:43:39.771845  107199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42251
	I0417 18:43:39.772215  107199 main.go:141] libmachine: () Calling .GetVersion
	I0417 18:43:39.772664  107199 main.go:141] libmachine: Using API Version  1
	I0417 18:43:39.772688  107199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0417 18:43:39.773025  107199 main.go:141] libmachine: () Calling .GetMachineName
	I0417 18:43:39.773210  107199 main.go:141] libmachine: (multinode-131005-m02) Calling .GetState
	I0417 18:43:39.774655  107199 status.go:330] multinode-131005-m02 host status = "Stopped" (err=<nil>)
	I0417 18:43:39.774667  107199 status.go:343] host is not running, skipping remaining checks
	I0417 18:43:39.774673  107199 status.go:257] multinode-131005-m02 status: &{Name:multinode-131005-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.13s)

TestMultiNode/serial/RestartMultiNode (86.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-131005 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0417 18:44:31.261404   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-131005 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m25.512707789s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-131005 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.05s)

TestMultiNode/serial/ValidateNameConflict (50.59s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-131005
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-131005-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-131005-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (75.116661ms)
-- stdout --
	* [multinode-131005-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-131005-m02' is duplicated with machine name 'multinode-131005-m02' in profile 'multinode-131005'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-131005-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-131005-m03 --driver=kvm2  --container-runtime=containerd: (49.434006357s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-131005
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-131005: exit status 80 (228.949934ms)
-- stdout --
	* Adding node m03 to cluster multinode-131005 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-131005-m03 already exists in multinode-131005-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-131005-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.59s)
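
Note: ValidateNameConflict checks two guards: "start -p" refuses a profile name that collides with a machine name inside an existing multi-node profile (exit 14, MK_USAGE), and "node add" refuses a node whose name already exists (exit 80). A simplified stand-in for the uniqueness check (names from this log; minikube's real validation lives in its config code):

    package main

    import "fmt"

    // validateProfileName rejects a new profile name that matches any
    // existing profile or machine name, mirroring the MK_USAGE error above.
    func validateProfileName(name string, taken map[string]bool) error {
        if taken[name] {
            return fmt.Errorf("profile name %q is duplicated; profile names must be unique", name)
        }
        return nil
    }

    func main() {
        // Machine names belonging to the existing 3-node profile in the log.
        taken := map[string]bool{
            "multinode-131005":     true,
            "multinode-131005-m02": true,
            "multinode-131005-m03": true,
        }
        fmt.Println(validateProfileName("multinode-131005-m02", taken)) // rejected
        fmt.Println(validateProfileName("multinode-131005-m04", taken)) // ok
    }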

TestPreload (270.45s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-930615 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0417 18:47:55.603325   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-930615 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m3.232605302s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-930615 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-930615 image pull gcr.io/k8s-minikube/busybox: (1.572296747s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-930615
E0417 18:49:31.260872   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-930615: (1m31.750984654s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-930615 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-930615 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (52.612558507s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-930615 image list
helpers_test.go:175: Cleaning up "test-preload-930615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-930615
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-930615: (1.056806352s)
--- PASS: TestPreload (270.45s)
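
Note: TestPreload starts an older Kubernetes (v1.24.4) with --preload=false, side-loads gcr.io/k8s-minikube/busybox, stops the cluster, restarts it with preloads enabled, and then asserts via "image list" that restoring the preload tarball did not wipe the side-loaded image. A Go sketch of that final assertion (profile name from this log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // After the preloaded restart, the image pulled before the stop must
        // still be present in containerd's image list.
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "test-preload-930615", "image", "list").Output()
        if err != nil {
            panic(err)
        }
        if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
            fmt.Println("ok: busybox survived the preload restart")
        } else {
            fmt.Println("FAIL: restart dropped the side-loaded image")
        }
    }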

TestScheduledStopUnix (120.6s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-270519 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-270519 --memory=2048 --driver=kvm2  --container-runtime=containerd: (48.870781084s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270519 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-270519 -n scheduled-stop-270519
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270519 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270519 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-270519 -n scheduled-stop-270519
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-270519
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270519 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-270519
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-270519: exit status 7 (75.239887ms)
-- stdout --
	scheduled-stop-270519
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-270519 -n scheduled-stop-270519
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-270519 -n scheduled-stop-270519: exit status 7 (75.450221ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-270519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-270519
--- PASS: TestScheduledStopUnix (120.60s)
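
Note: "stop --schedule" arms a delayed stop and "--cancel-scheduled" disarms it; the "signal error was: os: process already finished" lines record the test signalling the previously scheduled stop process. Conceptually the semantics are those of a cancellable timer, sketched below (minikube actually runs the delayed stop in a detached child process, not an in-process timer):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Arm a delayed stop, as `minikube stop --schedule 15s` does.
        stop := time.AfterFunc(15*time.Second, func() {
            fmt.Println("stopping cluster now")
        })
        // `--cancel-scheduled` corresponds to cancelling before it fires.
        if stop.Stop() {
            fmt.Println("scheduled stop cancelled")
        }
    }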

TestRunningBinaryUpgrade (211.45s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1526248757 start -p running-upgrade-378702 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0417 18:52:38.651150   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 18:52:55.603942   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1526248757 start -p running-upgrade-378702 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m14.689298419s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-378702 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-378702 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m14.797094053s)
helpers_test.go:175: Cleaning up "running-upgrade-378702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-378702
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-378702: (1.536691464s)
--- PASS: TestRunningBinaryUpgrade (211.45s)

TestKubernetesUpgrade (179.42s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-398735 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-398735 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m28.981411587s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-398735
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-398735: (2.334465215s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-398735 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-398735 status --format={{.Host}}: exit status 7 (90.034745ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-398735 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-398735 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (53.008260469s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-398735 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-398735 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-398735 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (140.576557ms)
-- stdout --
	* [kubernetes-upgrade-398735] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-398735
	    minikube start -p kubernetes-upgrade-398735 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3987352 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-398735 --kubernetes-version=v1.30.0-rc.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-398735 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-398735 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (33.582304079s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-398735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-398735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-398735: (1.212503041s)
--- PASS: TestKubernetesUpgrade (179.42s)
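
Note: the upgrade (v1.20.0 to v1.30.0-rc.2) restarts the cluster in place, while a requested version older than the running cluster is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) plus the recreate/second-cluster suggestions shown above. A simplified stand-in for the version gate, using semantic-version comparison via golang.org/x/mod/semver (minikube's real check is more involved):

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // checkVersionChange refuses moving an existing cluster to an older
    // Kubernetes version, mirroring the error text in the log above.
    func checkVersionChange(current, requested string) error {
        if semver.Compare(requested, current) < 0 {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
                current, requested)
        }
        return nil
    }

    func main() {
        fmt.Println(checkVersionChange("v1.30.0-rc.2", "v1.20.0")) // refused
        fmt.Println(checkVersionChange("v1.20.0", "v1.30.0-rc.2")) // nil: upgrade allowed
    }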

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-366104 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-366104 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (99.134941ms)

-- stdout --
	* [NoKubernetes-366104] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
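This check confirms that --kubernetes-version and --no-kubernetes are mutually exclusive. A minimal sketch of the workaround named in the stderr above, assuming the version came from a global config setting:

    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-366104 --no-kubernetes --driver=kvm2 --container-runtime=containerd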
TestNoKubernetes/serial/StartWithK8s (100.93s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-366104 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-366104 --driver=kvm2  --container-runtime=containerd: (1m40.648350295s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-366104 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.93s)

TestNoKubernetes/serial/StartWithStopK8s (46.31s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-366104 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-366104 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (45.21905279s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-366104 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-366104 status -o json: exit status 2 (253.911763ms)

-- stdout --
	{"Name":"NoKubernetes-366104","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-366104
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (46.31s)
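The JSON above is why exit status 2 is tolerated here: the host is Running while the kubelet and API server are Stopped. A sketch for extracting the component states from that output (assumes jq is available on the host):

    out/minikube-linux-amd64 -p NoKubernetes-366104 status -o json | jq -r '.Host, .Kubelet, .APIServer'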
TestPause/serial/Start (89.65s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-793627 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
E0417 18:54:31.260426   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-793627 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m29.653117942s)
--- PASS: TestPause/serial/Start (89.65s)

TestNoKubernetes/serial/Start (35.85s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-366104 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-366104 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (35.845558169s)
--- PASS: TestNoKubernetes/serial/Start (35.85s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-366104 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-366104 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.4616ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
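Only the exit status is asserted here: systemctl is-active returns non-zero when the kubelet unit is inactive. A sketch of the same probe run by hand, with the command copied from the test above:

    out/minikube-linux-amd64 ssh -p NoKubernetes-366104 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet inactive, as expected"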
TestNoKubernetes/serial/ProfileList (16.01s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.446741756s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.01s)

TestPause/serial/SecondStartNoReconfiguration (54.15s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-793627 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-793627 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (54.131498605s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (54.15s)

TestNoKubernetes/serial/Stop (1.5s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-366104
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-366104: (1.498675344s)
--- PASS: TestNoKubernetes/serial/Stop (1.50s)

TestNoKubernetes/serial/StartNoArgs (30.76s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-366104 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-366104 --driver=kvm2  --container-runtime=containerd: (30.756797306s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (30.76s)

TestNetworkPlugins/group/false (4.29s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-761042 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-761042 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (122.532679ms)

-- stdout --
	* [false-761042] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18665
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0417 18:56:05.227867  114025 out.go:291] Setting OutFile to fd 1 ...
	I0417 18:56:05.228038  114025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:05.228049  114025 out.go:304] Setting ErrFile to fd 2...
	I0417 18:56:05.228055  114025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0417 18:56:05.228270  114025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18665-75265/.minikube/bin
	I0417 18:56:05.229020  114025 out.go:298] Setting JSON to false
	I0417 18:56:05.230110  114025 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9515,"bootTime":1713370650,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0417 18:56:05.230178  114025 start.go:139] virtualization: kvm guest
	I0417 18:56:05.234768  114025 out.go:177] * [false-761042] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0417 18:56:05.236480  114025 out.go:177]   - MINIKUBE_LOCATION=18665
	I0417 18:56:05.238103  114025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0417 18:56:05.236510  114025 notify.go:220] Checking for updates...
	I0417 18:56:05.239797  114025 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18665-75265/kubeconfig
	I0417 18:56:05.241252  114025 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18665-75265/.minikube
	I0417 18:56:05.242663  114025 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0417 18:56:05.244083  114025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0417 18:56:05.245810  114025 config.go:182] Loaded profile config "NoKubernetes-366104": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0417 18:56:05.245925  114025 config.go:182] Loaded profile config "force-systemd-flag-455104": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0417 18:56:05.246050  114025 config.go:182] Loaded profile config "pause-793627": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0-rc.2
	I0417 18:56:05.246157  114025 driver.go:392] Setting default libvirt URI to qemu:///system
	I0417 18:56:05.285081  114025 out.go:177] * Using the kvm2 driver based on user configuration
	I0417 18:56:05.286465  114025 start.go:297] selected driver: kvm2
	I0417 18:56:05.286480  114025 start.go:901] validating driver "kvm2" against <nil>
	I0417 18:56:05.286496  114025 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0417 18:56:05.288616  114025 out.go:177] 
	W0417 18:56:05.289777  114025 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0417 18:56:05.291237  114025 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-761042 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-761042

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-761042

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-761042

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-761042

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-761042

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-761042

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-761042

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-761042

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-761042

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-761042

>>> host: /etc/nsswitch.conf:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /etc/hosts:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /etc/resolv.conf:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-761042

>>> host: crictl pods:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: crictl containers:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> k8s: describe netcat deployment:
error: context "false-761042" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-761042" does not exist

>>> k8s: netcat logs:
error: context "false-761042" does not exist

>>> k8s: describe coredns deployment:
error: context "false-761042" does not exist

>>> k8s: describe coredns pods:
error: context "false-761042" does not exist

>>> k8s: coredns logs:
error: context "false-761042" does not exist

>>> k8s: describe api server pod(s):
error: context "false-761042" does not exist

>>> k8s: api server logs:
error: context "false-761042" does not exist

>>> host: /etc/cni:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: ip a s:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: ip r s:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: iptables-save:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: iptables table nat:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> k8s: describe kube-proxy daemon set:
error: context "false-761042" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-761042" does not exist

>>> k8s: kube-proxy logs:
error: context "false-761042" does not exist

>>> host: kubelet daemon status:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: kubelet daemon config:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> k8s: kubelet logs:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18665-75265/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Apr 2024 18:55:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.17:8443
  name: pause-793627
contexts:
- context:
    cluster: pause-793627
    extensions:
    - extension:
        last-update: Wed, 17 Apr 2024 18:55:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: pause-793627
  name: pause-793627
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-793627
  user:
    client-certificate: /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/pause-793627/client.crt
    client-key: /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/pause-793627/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-761042

>>> host: docker daemon status:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: docker daemon config:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /etc/docker/daemon.json:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: docker system info:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: cri-docker daemon status:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: cri-docker daemon config:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: cri-dockerd version:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: containerd daemon status:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: containerd daemon config:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /etc/containerd/config.toml:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: containerd config dump:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: crio daemon status:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: crio daemon config:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: /etc/crio:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

>>> host: crio config:
* Profile "false-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-761042"

----------------------- debugLogs end: false-761042 [took: 3.982953048s] --------------------------------
helpers_test.go:175: Cleaning up "false-761042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-761042
--- PASS: TestNetworkPlugins/group/false (4.29s)
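The test passes because the start is rejected up front: the containerd runtime requires a CNI, so --cni=false is invalid. A hedged sketch of a start that should satisfy this check, assuming bridge is an accepted --cni value for this driver:

    minikube start -p false-761042 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=containerd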
TestStoppedBinaryUpgrade/Setup (0.67s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.67s)

TestStoppedBinaryUpgrade/Upgrade (192.94s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2728589566 start -p stopped-upgrade-210046 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2728589566 start -p stopped-upgrade-210046 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m23.145233292s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2728589566 -p stopped-upgrade-210046 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2728589566 -p stopped-upgrade-210046 stop: (1.413094241s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-210046 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-210046 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m48.384455186s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (192.94s)
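The three commands above are the entire upgrade scenario: provision with a historical release, stop it, then restart the stopped cluster with the binary under test. Condensed, using the temporary binary path from this run:

    /tmp/minikube-v1.26.0.2728589566 start -p stopped-upgrade-210046 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    /tmp/minikube-v1.26.0.2728589566 -p stopped-upgrade-210046 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-210046 --memory=2200 --driver=kvm2 --container-runtime=containerd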
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-366104 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-366104 "sudo systemctl is-active --quiet service kubelet": exit status 1 (237.388652ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestPause/serial/Pause (0.74s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-793627 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.25s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-793627 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-793627 --output=json --layout=cluster: exit status 2 (252.281363ms)

-- stdout --
	{"Name":"pause-793627","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-793627","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
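Exit status 2 is expected while the cluster is paused; the JSON reports StatusCode 418 (Paused) for the apiserver and 405 (Stopped) for the kubelet. A sketch for inspecting the per-node component codes (assumes jq is available):

    out/minikube-linux-amd64 status -p pause-793627 --output=json --layout=cluster | jq '.Nodes[].Components'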
TestPause/serial/Unpause (0.65s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-793627 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

TestPause/serial/PauseAgain (0.8s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-793627 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (0.98s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-793627 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.98s)

TestPause/serial/VerifyDeletedResources (0.11s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (196.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-819965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-819965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m16.055976253s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (196.06s)

TestStartStop/group/embed-certs/serial/FirstStart (151.7s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-924348 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-924348 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (2m31.699179266s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (151.70s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-210046
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-210046: (1.162420386s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

TestStartStop/group/no-preload/serial/FirstStart (114.04s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-684118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-684118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (1m54.044623277s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.04s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-819965 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c4457866-2131-4f09-9179-9171b4f24352] Pending
helpers_test.go:344: "busybox" [c4457866-2131-4f09-9179-9171b4f24352] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c4457866-2131-4f09-9179-9171b4f24352] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003642373s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-819965 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)
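The poll above waits on the integration-test=busybox label until the pod reports Running. An equivalent manual check, assuming kubectl wait is acceptable in place of the test's own polling:

    kubectl --context old-k8s-version-819965 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-819965 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-819965 exec busybox -- /bin/sh -c "ulimit -n"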
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-924348 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [56bca675-a0ea-4423-8703-1e7f552199ab] Pending
helpers_test.go:344: "busybox" [56bca675-a0ea-4423-8703-1e7f552199ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [56bca675-a0ea-4423-8703-1e7f552199ab] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004911001s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-924348 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-819965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-819965 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/Stop (93.5s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-819965 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-819965 --alsologtostderr -v=3: (1m33.499076573s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (93.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-924348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-924348 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/embed-certs/serial/Stop (92.47s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-924348 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-924348 --alsologtostderr -v=3: (1m32.472526342s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.47s)

TestStartStop/group/no-preload/serial/DeployApp (9.32s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-684118 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0229ab89-e4d3-42af-91d1-27d745ad6fd0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0229ab89-e4d3-42af-91d1-27d745ad6fd0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004908102s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-684118 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-750222 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-750222 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (1m0.281887892s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-684118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-684118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.093626483s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-684118 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (92.48s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-684118 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-684118 --alsologtostderr -v=3: (1m32.479025075s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-819965 -n old-k8s-version-819965
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-819965 -n old-k8s-version-819965: exit status 7 (86.833315ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-819965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
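Exit status 7 from status is tolerated ("may be ok"): it only signals that the host is stopped, and addons can still be enabled against a stopped profile. A sketch of the same sequence:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-819965 || true
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-819965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4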
TestStartStop/group/old-k8s-version/serial/SecondStart (485.91s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-819965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-819965 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (8m5.609301852s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-819965 -n old-k8s-version-819965
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (485.91s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-924348 -n embed-certs-924348
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-924348 -n embed-certs-924348: exit status 7 (84.070912ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-924348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (336.66s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-924348 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-924348 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (5m36.319467076s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-924348 -n embed-certs-924348
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.66s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-750222 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b26c0751-601b-4a45-83b8-34fce75a48e7] Pending
helpers_test.go:344: "busybox" [b26c0751-601b-4a45-83b8-34fce75a48e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b26c0751-601b-4a45-83b8-34fce75a48e7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004468708s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-750222 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)
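
The "waiting 8m0s for pods matching ..." lines above come from a poll-until-Running helper. A rough sketch of that idea follows; it assumes nothing about helpers_test.go beyond what the log shows (kubectl context and label selector are taken from above, and the poll interval is arbitrary):

// waitpods.go - sketch of the poll-until-Running pattern; not the actual
// helpers_test.go implementation.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitRunning(ctx context.Context, kubectx, selector string) error {
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", kubectx,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q not Running: %w", selector, ctx.Err())
		case <-time.After(2 * time.Second): // poll interval, illustrative
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 8*time.Minute)
	defer cancel()
	if err := waitRunning(ctx, "default-k8s-diff-port-750222", "integration-test=busybox"); err != nil {
		panic(err)
	}
	fmt.Println("all matching pods are Running")
}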

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-750222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-750222 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)
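
The subtest above enables metrics-server with its image and registry overridden to a fake domain, then describes the deployment to confirm the override propagated. A loose sketch of the same check; how minikube composes the final image reference is not assumed here, so the check only looks for the fake registry as a substring:

// addonoverride.go - loose sketch of the EnableAddonWhileActive check: enable
// an addon with an overridden image/registry, then grep the deployment
// description for the fake registry name.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "metrics-server",
		"-p", "default-k8s-diff-port-750222",
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if err := enable.Run(); err != nil {
		panic(err)
	}
	out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-750222",
		"describe", "deploy/metrics-server", "-n", "kube-system").Output()
	if err != nil {
		panic(err)
	}
	// Substring check only: the exact composed image reference is not assumed.
	if !strings.Contains(string(out), "fake.domain") {
		panic("registry override did not reach the deployment spec")
	}
	fmt.Println("override is visible in the deployment description")
}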

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (92.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-750222 --alsologtostderr -v=3
E0417 19:02:55.603627   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-750222 --alsologtostderr -v=3: (1m32.482664041s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.48s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-684118 -n no-preload-684118
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-684118 -n no-preload-684118: exit status 7 (75.153355ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-684118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (316.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-684118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-684118 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (5m16.382219688s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-684118 -n no-preload-684118
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (316.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-750222 -n default-k8s-diff-port-750222
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-750222 -n default-k8s-diff-port-750222: exit status 7 (85.341171ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-750222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (319.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-750222 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
E0417 19:04:31.260549   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-750222 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (5m19.231335615s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-750222 -n default-k8s-diff-port-750222
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (319.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-sckpv" [979e6c6e-ce28-47d1-b4a4-8f84c6d571b6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0417 19:07:55.603072   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-779776cb65-sckpv" [979e6c6e-ce28-47d1-b4a4-8f84c6d571b6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.005309304s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-sckpv" [979e6c6e-ce28-47d1-b4a4-8f84c6d571b6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004606001s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-924348 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-924348 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)
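
VerifyKubernetesImages lists the cluster's images as JSON and reports anything outside the expected set ("Found non-minikube image: ..." above). A sketch of that comparison follows; the JSON shape (an array of objects with a repoTags field) is an assumption about `minikube image list --format=json` and should be checked against the actual output, and the allowlist is illustrative:

// imagecheck.go - sketch of the VerifyKubernetesImages comparison. The JSON
// shape here is an ASSUMPTION about `image list --format=json`; the allowlist
// is illustrative (the real test builds it per Kubernetes version).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type listedImage struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "embed-certs-924348",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	expected := map[string]bool{ // illustrative entries only
		"registry.k8s.io/pause:3.9":     true,
		"registry.k8s.io/etcd:3.5.12-0": true,
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if !expected[tag] {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}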

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-924348 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-924348 -n embed-certs-924348
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-924348 -n embed-certs-924348: exit status 2 (279.143703ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-924348 -n embed-certs-924348
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-924348 -n embed-certs-924348: exit status 2 (280.975532ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-924348 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-924348 -n embed-certs-924348
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-924348 -n embed-certs-924348
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.02s)
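
The Pause subtest above follows a fixed sequence: pause the profile, confirm via `status` that the apiserver reports Paused and the kubelet Stopped (both return exit status 2, which is tolerated), then unpause and check both again. A condensed sketch of that sequence, with the profile name taken from the log and the exit code deliberately ignored while paused, as the harness does:

// pausecheck.go - condensed sketch of the Pause verification sequence.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func status(profile, field string) string {
	// Exit status 2 is expected for paused/stopped components, so the error
	// is intentionally discarded and only stdout is used.
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	const p = "embed-certs-924348"
	if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", p).Run(); err != nil {
		panic(err)
	}
	fmt.Println("apiserver:", status(p, "APIServer")) // want "Paused"
	fmt.Println("kubelet:  ", status(p, "Kubelet"))   // want "Stopped"
	if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", p).Run(); err != nil {
		panic(err)
	}
	fmt.Println("apiserver after unpause:", status(p, "APIServer")) // want "Running"
}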

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (65.7s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-218911 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-218911 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (1m5.696850452s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-slrrq" [1c965045-2b8b-4eda-80c1-8f2d1afc632c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-slrrq" [1c965045-2b8b-4eda-80c1-8f2d1afc632c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.005828452s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-slrrq" [1c965045-2b8b-4eda-80c1-8f2d1afc632c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005131757s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-684118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-684118 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.79s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-684118 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-684118 --alsologtostderr -v=1: (1.095099234s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-684118 -n no-preload-684118
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-684118 -n no-preload-684118: exit status 2 (311.965641ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-684118 -n no-preload-684118
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-684118 -n no-preload-684118: exit status 2 (286.655024ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-684118 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-684118 --alsologtostderr -v=1: (1.264158818s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-684118 -n no-preload-684118
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-684118 -n no-preload-684118
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.79s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (64.93s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0417 19:09:18.651694   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m4.925743797s)
--- PASS: TestNetworkPlugins/group/auto/Start (64.93s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-218911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-218911 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.251570881s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-218911 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-218911 --alsologtostderr -v=3: (2.367647599s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218911 -n newest-cni-218911
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218911 -n newest-cni-218911: exit status 7 (87.835784ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-218911 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (40.75s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-218911 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2
E0417 19:09:31.260897   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-218911 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0-rc.2: (40.414755632s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-218911 -n newest-cni-218911
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7p6bn" [0ca707f4-f466-4f91-ad0b-4067bf6c855d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7p6bn" [0ca707f4-f466-4f91-ad0b-4067bf6c855d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.010313993s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7p6bn" [0ca707f4-f466-4f91-ad0b-4067bf6c855d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012311823s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-750222 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-750222 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-750222 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-750222 --alsologtostderr -v=1: (1.017504502s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-750222 -n default-k8s-diff-port-750222
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-750222 -n default-k8s-diff-port-750222: exit status 2 (319.877905ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-750222 -n default-k8s-diff-port-750222
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-750222 -n default-k8s-diff-port-750222: exit status 2 (301.693304ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-750222 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-750222 -n default-k8s-diff-port-750222
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-750222 -n default-k8s-diff-port-750222
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.38s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-761042 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-761042 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-szmpz" [3b46d44d-ad73-4f74-8802-200525c6e653] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-szmpz" [3b46d44d-ad73-4f74-8802-200525c6e653] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005600444s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.31s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m11.30753104s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-218911 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-218911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218911 -n newest-cni-218911
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218911 -n newest-cni-218911: exit status 2 (279.632609ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-218911 -n newest-cni-218911
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-218911 -n newest-cni-218911: exit status 2 (280.167288ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-218911 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-218911 -n newest-cni-218911
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-218911 -n newest-cni-218911
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.98s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (117.38s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m57.38011834s)
--- PASS: TestNetworkPlugins/group/calico/Start (117.38s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (26.57s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-761042 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-761042 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.160156175s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-761042 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-761042 exec deployment/netcat -- nslookup kubernetes.default: (10.165855278s)
--- PASS: TestNetworkPlugins/group/auto/DNS (26.57s)
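
The DNS probe above timed out once and passed on a later attempt, which is why the subtest still records PASS: the harness retries transient in-pod DNS failures. A sketch of that bounded-retry shape, with attempt count and backoff chosen for illustration only:

// dnsretry.go - sketch of the retry-on-transient-failure pattern visible in
// the auto/DNS log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	var lastErr error
	backoff := 5 * time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "--context", "auto-761042",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d ok:\n%s", attempt, out)
			return
		}
		lastErr = fmt.Errorf("attempt %d: %v", attempt, err)
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff; intervals are illustrative
	}
	panic(lastErr)
}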

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xcw6h" [6a2f1e1c-f6e9-4552-a5dc-a6f0aa60f386] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004501975s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xcw6h" [6a2f1e1c-f6e9-4552-a5dc-a6f0aa60f386] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004761358s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-819965 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-819965 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-819965 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-819965 -n old-k8s-version-819965
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-819965 -n old-k8s-version-819965: exit status 2 (255.546705ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-819965 -n old-k8s-version-819965
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-819965 -n old-k8s-version-819965: exit status 2 (249.03242ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-819965 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-819965 -n old-k8s-version-819965
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-819965 -n old-k8s-version-819965
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.62s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (117.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m57.210083141s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (117.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
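
Localhost and HairPin above are the same `nc -z` probe pointed at two targets from inside the netcat pod: `localhost` checks the pod can reach its own port directly, while the service name `netcat` checks hairpin traffic (pod to service back to the same pod). A compact sketch of both probes, using the context and names from the log:

// hairpin.go - sketch of the Localhost/HairPin probes run from inside the pod.
package main

import (
	"fmt"
	"os/exec"
)

func probe(target string) error {
	return exec.Command("kubectl", "--context", "auto-761042",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z "+target+" 8080").Run()
}

func main() {
	for _, t := range []string{"localhost", "netcat"} {
		if err := probe(t); err != nil {
			panic(fmt.Sprintf("probe via %s failed: %v", t, err))
		}
		fmt.Println("reachable via", t)
	}
}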

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (134.82s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m14.815892794s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (134.82s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wr2pz" [c5110da9-89fe-4846-a99e-6f6f03768015] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006335905s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-761042 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-761042 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8mjkv" [14563785-0ba5-4990-9fa1-8e1a5366cf5c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-8mjkv" [14563785-0ba5-4990-9fa1-8e1a5366cf5c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005169853s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-761042 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (90.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0417 19:11:50.011981   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/no-preload-684118/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m30.105236909s)
--- PASS: TestNetworkPlugins/group/flannel/Start (90.11s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-r87gp" [c5e54b2e-f0c0-4679-a841-7ff83c7ba5de] Running
E0417 19:12:10.493201   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/no-preload-684118/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005687168s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-761042 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-761042 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xmzm9" [56a19b6b-f8d8-4df5-8fc3-505f089d1964] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xmzm9" [56a19b6b-f8d8-4df5-8fc3-505f089d1964] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.007933447s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-761042 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-761042 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-761042 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2k968" [0a44c26e-a9e1-4217-aafb-c336abc26cc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2k968" [0a44c26e-a9e1-4217-aafb-c336abc26cc5] Running
E0417 19:12:36.098042   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:36.103901   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:36.114198   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:36.134915   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:36.175598   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:36.255902   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:36.416487   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:36.736776   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:37.377320   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:38.657766   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005622978s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.17s)
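(Note: the E0417 cert_rotation lines interleaved above come from the test binary's client-go certificate watcher, which still references the client.crt of the default-k8s-diff-port-750222 profile; that profile appears to have been deleted by an earlier test, so these messages are background noise rather than a failure of this subtest.)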

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-761042 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (102.84s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0417 19:12:46.339872   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
E0417 19:12:51.453511   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/no-preload-684118/client.crt: no such file or directory
E0417 19:12:55.603509   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/addons-526030/client.crt: no such file or directory
E0417 19:12:56.580890   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-761042 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m42.843829795s)
--- PASS: TestNetworkPlugins/group/bridge/Start (102.84s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-761042 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-761042 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-t7fd8" [b3553bbf-7029-4d8f-88a9-a989d97c9d37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-t7fd8" [b3553bbf-7029-4d8f-88a9-a989d97c9d37] Running
E0417 19:13:17.061562   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/default-k8s-diff-port-750222/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004948426s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.25s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-761042 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nzrk4" [cd6e34a3-7060-4542-8928-5700fb464701] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005923934s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-761042 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (10.32s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-761042 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5tgps" [50f87fc3-838f-489c-8999-248bfde47c2f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5tgps" [50f87fc3-838f-489c-8999-248bfde47c2f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004699444s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-761042 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-761042 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-761042 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bcn78" [c376d06c-59f0-4578-bf52-f1dc0b7fd32f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bcn78" [c376d06c-59f0-4578-bf52-f1dc0b7fd32f] Running
E0417 19:14:31.260771   82524 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/functional-366561/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004892978s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-761042 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
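The DNS, Localhost, and HairPin subtests above exercise three independent paths through each CNI: in-cluster name resolution, pod-local loopback, and a pod reaching itself through its own Service VIP (hairpin NAT). A by-hand equivalent against any of these profiles, sketched here under the assumption that the netcat deployment from testdata/netcat-deployment.yaml is still running:

# resolve the cluster DNS name from inside the pod
kubectl --context bridge-761042 exec deployment/netcat -- nslookup kubernetes.default
# check port 8080 on the pod's own loopback
kubectl --context bridge-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# check port 8080 via the pod's own Service name (hairpin path)
kubectl --context bridge-761042 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"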

Test skip (36/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0-rc.2/cached-images 0
15 TestDownloadOnly/v1.30.0-rc.2/binaries 0
16 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
115 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
116 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
118 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
258 TestStartStop/group/disable-driver-mounts 0.16
270 TestNetworkPlugins/group/kubenet 4.51
278 TestNetworkPlugins/group/cilium 4.09

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.30.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
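All eight TunnelCmd subtests above skip for the same reason: functional_test_tunnel_test.go:90 detects that running route needs a password. A rough pre-flight check for a Linux host, sketched here assuming net-tools' route is installed and sudo supports non-interactive mode (-n):

# succeeds only if route can run without a password prompt
sudo -n route -n >/dev/null 2>&1 \
  && echo "passwordless route available: tunnel tests can run" \
  || echo "password required for route: tunnel tests will skip"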

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-226975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-226975
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.51s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-761042 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-761042

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-761042

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-761042

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-761042

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-761042

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-761042

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-761042

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-761042

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-761042

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-761042

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /etc/hosts:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /etc/resolv.conf:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-761042

>>> host: crictl pods:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: crictl containers:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> k8s: describe netcat deployment:
error: context "kubenet-761042" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-761042" does not exist

>>> k8s: netcat logs:
error: context "kubenet-761042" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-761042" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-761042" does not exist

>>> k8s: coredns logs:
error: context "kubenet-761042" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-761042" does not exist

>>> k8s: api server logs:
error: context "kubenet-761042" does not exist

>>> host: /etc/cni:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: ip a s:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: ip r s:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: iptables-save:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: iptables table nat:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-761042" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-761042" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-761042" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: kubelet daemon config:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> k8s: kubelet logs:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18665-75265/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Apr 2024 18:55:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.17:8443
  name: pause-793627
contexts:
- context:
    cluster: pause-793627
    extensions:
    - extension:
        last-update: Wed, 17 Apr 2024 18:55:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: pause-793627
  name: pause-793627
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-793627
  user:
    client-certificate: /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/pause-793627/client.crt
    client-key: /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/pause-793627/client.key
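(The dump above contains only a leftover pause-793627 entry and an empty current-context, consistent with the kubenet-761042 profile never having been created; this is why every kubectl probe in this debugLogs block reports that the context does not exist.)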

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-761042

>>> host: docker daemon status:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: docker daemon config:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: docker system info:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: cri-docker daemon status:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: cri-docker daemon config:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: cri-dockerd version:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: containerd daemon status:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: containerd daemon config:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: containerd config dump:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: crio daemon status:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: crio daemon config:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: /etc/crio:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

>>> host: crio config:
* Profile "kubenet-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-761042"

----------------------- debugLogs end: kubenet-761042 [took: 4.346749942s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-761042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-761042
--- SKIP: TestNetworkPlugins/group/kubenet (4.51s)

TestNetworkPlugins/group/cilium (4.09s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-761042 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-761042

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-761042

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-761042

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-761042

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-761042

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-761042

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-761042

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-761042

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-761042

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-761042

>>> host: /etc/nsswitch.conf:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /etc/hosts:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /etc/resolv.conf:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-761042

>>> host: crictl pods:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: crictl containers:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> k8s: describe netcat deployment:
error: context "cilium-761042" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-761042" does not exist

>>> k8s: netcat logs:
error: context "cilium-761042" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-761042" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-761042" does not exist

>>> k8s: coredns logs:
error: context "cilium-761042" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-761042" does not exist

>>> k8s: api server logs:
error: context "cilium-761042" does not exist

>>> host: /etc/cni:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: ip a s:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: ip r s:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: iptables-save:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: iptables table nat:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-761042

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-761042

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-761042" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-761042" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-761042

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-761042

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-761042" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-761042" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-761042" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-761042" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-761042" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: kubelet daemon config:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> k8s: kubelet logs:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18665-75265/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Apr 2024 18:55:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.39.17:8443
  name: pause-793627
contexts:
- context:
    cluster: pause-793627
    extensions:
    - extension:
        last-update: Wed, 17 Apr 2024 18:55:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: pause-793627
  name: pause-793627
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-793627
  user:
    client-certificate: /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/pause-793627/client.crt
    client-key: /home/jenkins/minikube-integration/18665-75265/.minikube/profiles/pause-793627/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-761042

>>> host: docker daemon status:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: docker daemon config:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: docker system info:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: cri-docker daemon status:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: cri-docker daemon config:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: cri-dockerd version:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: containerd daemon status:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: containerd daemon config:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: containerd config dump:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: crio daemon status:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: crio daemon config:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: /etc/crio:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

>>> host: crio config:
* Profile "cilium-761042" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-761042"

----------------------- debugLogs end: cilium-761042 [took: 3.935597111s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-761042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-761042
--- SKIP: TestNetworkPlugins/group/cilium (4.09s)