Test Report: KVM_Linux_containerd 19355

6d23947514fd7a389789fed180382829b6444229:2024-07-31:35588
Test failures (1/334)

Order  Failed test                                Duration
104    TestFunctional/parallel/ServiceCmdConnect  30.39s
TestFunctional/parallel/ServiceCmdConnect (30.39s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-406825 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-406825 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-6jtbn" [0a435882-bbd8-4ef6-afbe-d7398712d43b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-6jtbn" [0a435882-bbd8-4ef6-afbe-d7398712d43b] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005877582s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.243:31342
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1657: error fetching http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.39.243:31342: Get "http://192.168.39.243:31342": dial tcp 192.168.39.243:31342: connect: connection refused
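
The trace above shows the test flow: create the echoserver deployment, expose it as a NodePort service, resolve the URL with "minikube service --url", then poll the endpoint. Every poll is refused at the TCP level, so nothing answered on 192.168.39.243:31342 even though the pod reported healthy. A minimal shell sketch for reproducing the probe by hand (the retry budget and sleep interval below are assumptions, not the test's actual parameters):

    # Hypothetical reproduction of the test's endpoint probe.
    URL=$(out/minikube-linux-amd64 -p functional-406825 service hello-node-connect --url)
    for i in $(seq 1 10); do
      curl -fsS --max-time 5 "$URL" && exit 0  # endpoint answered
      sleep 3                                  # assumed backoff between attempts
    done
    echo "endpoint never answered: $URL" >&2
    exit 1
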
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-406825 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-57b4589c47-6jtbn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-406825/192.168.39.243
Start Time:       Wed, 31 Jul 2024 19:38:03 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=57b4589c47
Annotations:      <none>
Status:           Running
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-57b4589c47
Containers:
  echoserver:
    Container ID:   containerd://79805f790046fbb6971a630e4b001268ed7ff79519a46fc8e02a4d84a36de0ea
    Image:          registry.k8s.io/echoserver:1.8
    Image ID:       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Wed, 31 Jul 2024 19:38:05 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r6qpn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-api-access-r6qpn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  27s   default-scheduler  Successfully assigned default/hello-node-connect-57b4589c47-6jtbn to functional-406825
  Normal  Pulling    27s   kubelet            Pulling image "registry.k8s.io/echoserver:1.8"
  Normal  Pulled     25s   kubelet            Successfully pulled image "registry.k8s.io/echoserver:1.8" in 143ms (1.513s including waiting). Image size: 46237695 bytes.
  Normal  Created    25s   kubelet            Created container echoserver
  Normal  Started    25s   kubelet            Started container echoserver

functional_test.go:1604: (dbg) Run:  kubectl --context functional-406825 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-406825 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.244.63
IPs:                      10.101.244.63
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31342/TCP
Endpoints:                10.244.0.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
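
The svc describe above exposes the contradiction: the service has a ready endpoint (10.244.0.6:8080), yet the NodePort 31342 refuses connections, which points at the node-level forwarding path (kube-proxy) rather than the container itself. A few hedged follow-up checks, reusing the profile name and addresses from the logs above:

    # Is the pod still registered behind the service?
    kubectl --context functional-406825 get endpoints hello-node-connect

    # Probe the NodePort from inside the VM, bypassing the host-to-VM network.
    minikube -p functional-406825 ssh -- curl -sv http://localhost:31342

    # Probe the pod IP directly to rule out the container itself.
    minikube -p functional-406825 ssh -- curl -sv http://10.244.0.6:8080
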
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-406825 -n functional-406825
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 logs -n 25: (1.714595788s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                      |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-406825 ssh sudo cat                                                | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | /etc/ssl/certs/6241492.pem                                                    |                   |         |         |                     |                     |
	| ssh            | functional-406825 ssh sudo cat                                                | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | /usr/share/ca-certificates/6241492.pem                                        |                   |         |         |                     |                     |
	| image          | functional-406825 image load --daemon                                         | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | docker.io/kicbase/echo-server:functional-406825                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| ssh            | functional-406825 ssh sudo cat                                                | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                                     |                   |         |         |                     |                     |
	| ssh            | functional-406825 ssh sudo cat                                                | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | /etc/test/nested/copy/624149/hosts                                            |                   |         |         |                     |                     |
	| image          | functional-406825 image ls                                                    | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	| image          | functional-406825 image load --daemon                                         | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | docker.io/kicbase/echo-server:functional-406825                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image          | functional-406825 image ls                                                    | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	| image          | functional-406825 image load --daemon                                         | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | docker.io/kicbase/echo-server:functional-406825                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image          | functional-406825 image ls                                                    | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	| image          | functional-406825 image save docker.io/kicbase/echo-server:functional-406825  | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image          | functional-406825 image rm                                                    | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | docker.io/kicbase/echo-server:functional-406825                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image          | functional-406825 image ls                                                    | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	| image          | functional-406825 image load                                                  | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image          | functional-406825 image ls                                                    | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	| image          | functional-406825 image save --daemon                                         | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | docker.io/kicbase/echo-server:functional-406825                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| update-context | functional-406825                                                             | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | update-context                                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                        |                   |         |         |                     |                     |
	| update-context | functional-406825                                                             | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | update-context                                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                        |                   |         |         |                     |                     |
	| update-context | functional-406825                                                             | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | update-context                                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                        |                   |         |         |                     |                     |
	| image          | functional-406825                                                             | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | image ls --format short                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| image          | functional-406825                                                             | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | image ls --format yaml                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	| ssh            | functional-406825 ssh pgrep                                                   | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC |                     |
	|                | buildkitd                                                                     |                   |         |         |                     |                     |
	| image          | functional-406825 image build -t                                              | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|                | localhost/my-image:functional-406825                                          |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                              |                   |         |         |                     |                     |
	| image          | functional-406825 image ls                                                    | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	| image          | functional-406825                                                             | functional-406825 | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC |                     |
	|                | image ls --format json                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                             |                   |         |         |                     |                     |
	|----------------|-------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:38:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:38:03.039711  631651 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:38:03.040048  631651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:38:03.040060  631651 out.go:304] Setting ErrFile to fd 2...
	I0731 19:38:03.040065  631651 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:38:03.040346  631651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 19:38:03.040961  631651 out.go:298] Setting JSON to false
	I0731 19:38:03.042446  631651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12027,"bootTime":1722442656,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:38:03.042586  631651 start.go:139] virtualization: kvm guest
	I0731 19:38:03.044331  631651 out.go:177] * [functional-406825] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:38:03.046231  631651 notify.go:220] Checking for updates...
	I0731 19:38:03.046277  631651 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:38:03.047920  631651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:38:03.049520  631651 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	I0731 19:38:03.050874  631651 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	I0731 19:38:03.052002  631651 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:38:03.053222  631651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:38:03.055064  631651 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 19:38:03.055594  631651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:38:03.055643  631651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:38:03.072556  631651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
	I0731 19:38:03.073006  631651 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:38:03.073572  631651 main.go:141] libmachine: Using API Version  1
	I0731 19:38:03.073595  631651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:38:03.073936  631651 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:38:03.074119  631651 main.go:141] libmachine: (functional-406825) Calling .DriverName
	I0731 19:38:03.074369  631651 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:38:03.074667  631651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:38:03.074705  631651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:38:03.090218  631651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0731 19:38:03.090701  631651 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:38:03.091249  631651 main.go:141] libmachine: Using API Version  1
	I0731 19:38:03.091277  631651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:38:03.091652  631651 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:38:03.091850  631651 main.go:141] libmachine: (functional-406825) Calling .DriverName
	I0731 19:38:03.129855  631651 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:38:03.131195  631651 start.go:297] selected driver: kvm2
	I0731 19:38:03.131215  631651 start.go:901] validating driver "kvm2" against &{Name:functional-406825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-406825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:38:03.131357  631651 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:38:03.132528  631651 cni.go:84] Creating CNI manager for ""
	I0731 19:38:03.132546  631651 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0731 19:38:03.132596  631651 start.go:340] cluster config:
	{Name:functional-406825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-406825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:38:03.134658  631651 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	d595e113e36db       5107333e08a87       2 seconds ago        Running             mysql                       0                   2a15baf358afa       mysql-64454c8b5c-w77r4
	b83ebda98a07f       6e38f40d628db       12 seconds ago       Running             storage-provisioner         4                   1c87913d6fc6b       storage-provisioner
	5a4384c31025e       115053965e86b       16 seconds ago       Running             dashboard-metrics-scraper   0                   ffc1c313f021e       dashboard-metrics-scraper-b5fc48f67-c7rw5
	47999e35dc249       07655ddf2eebe       18 seconds ago       Running             kubernetes-dashboard        0                   cad990844a166       kubernetes-dashboard-779776cb65-8c8zn
	faafbb13ccfd5       56cc512116c8f       24 seconds ago       Exited              mount-munger                0                   2097c338ab734       busybox-mount
	79805f790046f       82e4c8a736a4f       26 seconds ago       Running             echoserver                  0                   88289ed18ce25       hello-node-connect-57b4589c47-6jtbn
	a81b2c1c96a23       82e4c8a736a4f       26 seconds ago       Running             echoserver                  0                   35a6b1d071927       hello-node-6d85cfcfd8-f9pcn
	7115833608482       6e38f40d628db       40 seconds ago       Exited              storage-provisioner         3                   1c87913d6fc6b       storage-provisioner
	3b6182c0a1e6f       1f6d574d502f3       59 seconds ago       Running             kube-apiserver              0                   0d7ab1a1b1411       kube-apiserver-functional-406825
	055cecda9b96c       3861cfcd7c04c       59 seconds ago       Running             etcd                        2                   08f7e7048ec17       etcd-functional-406825
	b25f85a57e7f2       3edc18e7b7672       59 seconds ago       Running             kube-scheduler              2                   5db15c27519cd       kube-scheduler-functional-406825
	87b2c2b17b4d2       76932a3b37d7e       59 seconds ago       Running             kube-controller-manager     3                   880c93bbe909e       kube-controller-manager-functional-406825
	9467ea7805021       76932a3b37d7e       About a minute ago   Exited              kube-controller-manager     2                   880c93bbe909e       kube-controller-manager-functional-406825
	70cdf402796ff       3edc18e7b7672       About a minute ago   Exited              kube-scheduler              1                   5db15c27519cd       kube-scheduler-functional-406825
	e5a09d5dee13c       3861cfcd7c04c       About a minute ago   Exited              etcd                        1                   08f7e7048ec17       etcd-functional-406825
	1c2bd64fa098a       cbb01a7bd410d       About a minute ago   Running             coredns                     1                   2e2b1c3ae9f56       coredns-7db6d8ff4d-jktmb
	56df5f47a4f36       55bb025d2cfa5       About a minute ago   Running             kube-proxy                  1                   884e7a12c3459       kube-proxy-drw49
	4734c24ce252f       cbb01a7bd410d       2 minutes ago        Exited              coredns                     0                   2e2b1c3ae9f56       coredns-7db6d8ff4d-jktmb
	a4329523b3534       55bb025d2cfa5       2 minutes ago        Exited              kube-proxy                  0                   884e7a12c3459       kube-proxy-drw49
	
	
	==> containerd <==
	Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.205106284Z" level=info msg="CreateContainer within sandbox \"2a15baf358afaaf4193be011748f33458e37517342a0c71f1f653e08c0bd6519\" for &ContainerMetadata{Name:mysql,Attempt:0,} returns container id \"d595e113e36dbd2991505a208ad8004ef3e949325cea67c6cab4568154e60e6c\""
	Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.207148611Z" level=info msg="StartContainer for \"d595e113e36dbd2991505a208ad8004ef3e949325cea67c6cab4568154e60e6c\""
	Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.224217984Z" level=info msg="shim disconnected" id=77a0p1tpspptkl9wcbj51cpk0 namespace=k8s.io
	Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.225976606Z" level=warning msg="cleaning up after shim disconnected" id=77a0p1tpspptkl9wcbj51cpk0 namespace=k8s.io
	Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.226052020Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.689656065Z" level=info msg="StartContainer for \"d595e113e36dbd2991505a208ad8004ef3e949325cea67c6cab4568154e60e6c\" returns successfully"
	Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.930520114Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-406825\""
	Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.938167992Z" level=info msg="ImageCreate event name:\"sha256:ca33cbd93a7d78edf7bbc4ba7f5ceaab13402bd5e08d57b6fd628cf608e9d127\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Jul 31 19:38:30 functional-406825 containerd[3693]: time="2024-07-31T19:38:30.938894721Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-406825\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.137585672Z" level=info msg="StopPodSandbox for \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\""
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.191324972Z" level=info msg="TearDown network for sandbox \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\" successfully"
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.191372964Z" level=info msg="StopPodSandbox for \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\" returns successfully"
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.192015606Z" level=info msg="RemovePodSandbox for \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\""
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.192073007Z" level=info msg="Forcibly stopping sandbox \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\""
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.217199987Z" level=info msg="TearDown network for sandbox \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\" successfully"
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.228794961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.228901655Z" level=info msg="RemovePodSandbox \"47b534f6d1b67bc7df99a169a85bbcfc0348f718bd1b98e2c57b6738aa68da52\" returns successfully"
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229344627Z" level=info msg="StopPodSandbox for \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\""
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229441237Z" level=info msg="TearDown network for sandbox \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\" successfully"
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229465615Z" level=info msg="StopPodSandbox for \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\" returns successfully"
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229783378Z" level=info msg="RemovePodSandbox for \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\""
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229871285Z" level=info msg="Forcibly stopping sandbox \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\""
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.229924740Z" level=info msg="TearDown network for sandbox \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\" successfully"
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.237274927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jul 31 19:38:32 functional-406825 containerd[3693]: time="2024-07-31T19:38:32.237467368Z" level=info msg="RemovePodSandbox \"800de94cff5fa9d776758c5cd752b6ef7fec458c8dc0c6a57d0aba00c46434a3\" returns successfully"
	
	
	==> coredns [1c2bd64fa098a8776a450dc431d22e2857de84147c6670490bc1dd1b534471c1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48062 - 63647 "HINFO IN 3992222033566678196.4594852637004116219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010896725s
	
	
	==> coredns [4734c24ce252fddbacaa087de98c8b525b4ada0576dce000a59a921b85f327d0] <==
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1846180525]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:35:42.098) (total time: 30001ms):
	Trace[1846180525]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:36:12.099)
	Trace[1846180525]: [30.001394109s] [30.001394109s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2057238692]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:35:42.098) (total time: 30001ms):
	Trace[2057238692]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:36:12.099)
	Trace[2057238692]: [30.001108762s] [30.001108762s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[291807057]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 19:35:42.099) (total time: 30001ms):
	Trace[291807057]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:36:12.100)
	Trace[291807057]: [30.001301648s] [30.001301648s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:49643 - 4586 "HINFO IN 7766713651010527523.4206985503821084800. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009773837s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-406825
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-406825
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0990a249c5f59d46ce7414523e8e67e8da946825
	                    minikube.k8s.io/name=functional-406825
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_35_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:35:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-406825
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:38:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:37:35 +0000   Wed, 31 Jul 2024 19:35:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:37:35 +0000   Wed, 31 Jul 2024 19:35:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:37:35 +0000   Wed, 31 Jul 2024 19:35:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:37:35 +0000   Wed, 31 Jul 2024 19:35:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.243
	  Hostname:    functional-406825
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 03368050aac543f58b19785ef3108713
	  System UUID:                03368050-aac5-43f5-8b19-785ef3108713
	  Boot ID:                    181c5881-803f-47ee-9c82-783afad1dc27
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-f9pcn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     hello-node-connect-57b4589c47-6jtbn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     mysql-64454c8b5c-w77r4                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    14s
	  kube-system                 coredns-7db6d8ff4d-jktmb                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m52s
	  kube-system                 etcd-functional-406825                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m5s
	  kube-system                 kube-apiserver-functional-406825             250m (12%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-functional-406825    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 kube-proxy-drw49                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-scheduler-functional-406825             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-c7rw5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-8c8zn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m50s                kube-proxy       
	  Normal  Starting                 118s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m5s                 kubelet          Node functional-406825 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m5s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    3m5s                 kubelet          Node functional-406825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s                 kubelet          Node functional-406825 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m5s                 kubelet          Node functional-406825 status is now: NodeReady
	  Normal  RegisteredNode           2m52s                node-controller  Node functional-406825 event: Registered Node functional-406825 in Controller
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node functional-406825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node functional-406825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)  kubelet          Node functional-406825 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           94s                  node-controller  Node functional-406825 event: Registered Node functional-406825 in Controller
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node functional-406825 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node functional-406825 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x7 over 60s)    kubelet          Node functional-406825 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           45s                  node-controller  Node functional-406825 event: Registered Node functional-406825 in Controller
	
	
	==> dmesg <==
	[  +0.143415] systemd-fstab-generator[2228]: Ignoring "noauto" option for root device
	[  +0.306049] systemd-fstab-generator[2257]: Ignoring "noauto" option for root device
	[  +1.733015] systemd-fstab-generator[2409]: Ignoring "noauto" option for root device
	[  +0.095931] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.693164] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.351308] kauditd_printk_skb: 21 callbacks suppressed
	[  +1.474531] systemd-fstab-generator[3120]: Ignoring "noauto" option for root device
	[  +6.945731] kauditd_printk_skb: 23 callbacks suppressed
	[Jul31 19:37] systemd-fstab-generator[3318]: Ignoring "noauto" option for root device
	[ +13.125870] systemd-fstab-generator[3618]: Ignoring "noauto" option for root device
	[  +0.097770] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.063034] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
	[  +0.171575] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.144135] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.313486] systemd-fstab-generator[3685]: Ignoring "noauto" option for root device
	[  +1.147859] systemd-fstab-generator[3848]: Ignoring "noauto" option for root device
	[ +10.888321] kauditd_printk_skb: 125 callbacks suppressed
	[  +1.370316] systemd-fstab-generator[4132]: Ignoring "noauto" option for root device
	[  +4.286509] kauditd_printk_skb: 41 callbacks suppressed
	[ +15.449615] systemd-fstab-generator[4477]: Ignoring "noauto" option for root device
	[  +5.955737] kauditd_printk_skb: 20 callbacks suppressed
	[Jul31 19:38] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.983729] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.015631] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.825164] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [055cecda9b96c4180b7f1f2927cb9a081af2a662fa7558589d69050ca26936b8] <==
	{"level":"info","ts":"2024-07-31T19:37:32.957782Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T19:37:32.957872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T19:37:32.95814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 switched to configuration voters=(5579817544954101747)"}
	{"level":"info","ts":"2024-07-31T19:37:32.959908Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c7dcc22c4a571085","local-member-id":"4d6f7e7e767b3ff3","added-peer-id":"4d6f7e7e767b3ff3","added-peer-peer-urls":["https://192.168.39.243:2380"]}
	{"level":"info","ts":"2024-07-31T19:37:32.960049Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c7dcc22c4a571085","local-member-id":"4d6f7e7e767b3ff3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:37:32.960103Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:37:32.968612Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T19:37:32.969259Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-07-31T19:37:32.969442Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-07-31T19:37:32.969908Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4d6f7e7e767b3ff3","initial-advertise-peer-urls":["https://192.168.39.243:2380"],"listen-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.243:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T19:37:32.971653Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T19:37:34.024576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T19:37:34.024632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T19:37:34.024671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 3"}
	{"level":"info","ts":"2024-07-31T19:37:34.024682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became candidate at term 4"}
	{"level":"info","ts":"2024-07-31T19:37:34.024687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgVoteResp from 4d6f7e7e767b3ff3 at term 4"}
	{"level":"info","ts":"2024-07-31T19:37:34.024696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became leader at term 4"}
	{"level":"info","ts":"2024-07-31T19:37:34.024711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d6f7e7e767b3ff3 elected leader 4d6f7e7e767b3ff3 at term 4"}
	{"level":"info","ts":"2024-07-31T19:37:34.030148Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4d6f7e7e767b3ff3","local-member-attributes":"{Name:functional-406825 ClientURLs:[https://192.168.39.243:2379]}","request-path":"/0/members/4d6f7e7e767b3ff3/attributes","cluster-id":"c7dcc22c4a571085","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:37:34.030201Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:37:34.030514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:37:34.032784Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:37:34.032986Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:37:34.03346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.243:2379"}
	{"level":"info","ts":"2024-07-31T19:37:34.035338Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e5a09d5dee13cd4329bb354ac57ebf3d25435aa03f27b2e513b7835c15be9ecf] <==
	{"level":"info","ts":"2024-07-31T19:36:32.778743Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-07-31T19:36:33.755879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T19:36:33.755961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:36:33.756002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgPreVoteResp from 4d6f7e7e767b3ff3 at term 2"}
	{"level":"info","ts":"2024-07-31T19:36:33.756151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T19:36:33.756178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 received MsgVoteResp from 4d6f7e7e767b3ff3 at term 3"}
	{"level":"info","ts":"2024-07-31T19:36:33.756295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4d6f7e7e767b3ff3 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T19:36:33.756319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4d6f7e7e767b3ff3 elected leader 4d6f7e7e767b3ff3 at term 3"}
	{"level":"info","ts":"2024-07-31T19:36:33.764183Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4d6f7e7e767b3ff3","local-member-attributes":"{Name:functional-406825 ClientURLs:[https://192.168.39.243:2379]}","request-path":"/0/members/4d6f7e7e767b3ff3/attributes","cluster-id":"c7dcc22c4a571085","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:36:33.764436Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:36:33.765016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:36:33.767011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:36:33.767043Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:36:33.770692Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T19:36:33.789032Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.243:2379"}
	{"level":"info","ts":"2024-07-31T19:37:30.583778Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T19:37:30.583956Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-406825","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"]}
	{"level":"warn","ts":"2024-07-31T19:37:30.584048Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:37:30.584079Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:37:30.58579Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:37:30.585859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.243:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T19:37:30.585902Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4d6f7e7e767b3ff3","current-leader-member-id":"4d6f7e7e767b3ff3"}
	{"level":"info","ts":"2024-07-31T19:37:30.589075Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-07-31T19:37:30.589179Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.243:2380"}
	{"level":"info","ts":"2024-07-31T19:37:30.589188Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-406825","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.243:2380"],"advertise-client-urls":["https://192.168.39.243:2379"]}
	
	
	==> kernel <==
	 19:38:32 up 3 min,  0 users,  load average: 1.69, 0.65, 0.25
	Linux functional-406825 5.10.207 #1 SMP Wed Jul 31 15:10:11 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3b6182c0a1e6f18fd53ab4dbf00d5335cd376fab510c2dbf2cd0300582f35c73] <==
	I0731 19:37:35.319423       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 19:37:35.319609       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 19:37:35.326320       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 19:37:35.326543       1 aggregator.go:165] initial CRD sync complete...
	I0731 19:37:35.326702       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 19:37:35.326749       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 19:37:35.326880       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:37:35.351987       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 19:37:36.202316       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 19:37:36.536083       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.243]
	I0731 19:37:36.538680       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 19:37:36.546303       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 19:37:36.807457       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 19:37:36.821430       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 19:37:36.869479       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 19:37:36.894048       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 19:37:36.902295       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 19:37:57.533065       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.123.44"}
	I0731 19:38:01.387630       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 19:38:01.498554       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.44.172"}
	I0731 19:38:03.335777       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.244.63"}
	I0731 19:38:04.725156       1 controller.go:615] quota admission added evaluator for: namespaces
	I0731 19:38:05.098790       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.199.87"}
	I0731 19:38:05.157054       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.144.204"}
	I0731 19:38:17.988325       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.172.2"}
	
	
	==> kube-controller-manager [87b2c2b17b4d24855f522d4e55dae3fed9e8133c1e5abe3a3d7c261cc642c399] <==
	I0731 19:38:04.933190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.479608ms"
	E0731 19:38:04.933241       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 19:38:05.006545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="32.761528ms"
	I0731 19:38:05.044518       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="37.912257ms"
	I0731 19:38:05.044617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="33.204µs"
	I0731 19:38:05.045314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="75.08µs"
	I0731 19:38:05.053135       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="47.519565ms"
	I0731 19:38:05.145524       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="92.214742ms"
	I0731 19:38:05.145873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="316.928µs"
	I0731 19:38:05.146272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="180.512µs"
	I0731 19:38:05.151519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="75.917µs"
	I0731 19:38:06.317927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="14.30351ms"
	I0731 19:38:06.318057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-57b4589c47" duration="66.692µs"
	I0731 19:38:06.333111       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="11.421925ms"
	I0731 19:38:06.333184       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-6d85cfcfd8" duration="32.869µs"
	I0731 19:38:14.351058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="10.05001ms"
	I0731 19:38:14.351606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="44.982µs"
	I0731 19:38:16.354176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.557563ms"
	I0731 19:38:16.354939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="52.834µs"
	I0731 19:38:18.076537       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="33.02873ms"
	I0731 19:38:18.105325       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="28.748776ms"
	I0731 19:38:18.105388       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="37.105µs"
	I0731 19:38:18.117247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="34.335µs"
	I0731 19:38:31.793165       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="61.910259ms"
	I0731 19:38:31.793928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="115.383µs"
	
	
	==> kube-controller-manager [9467ea7805021bd3313c4a56b8d5ebf71f859a118f403985642acea447540c90] <==
	I0731 19:36:58.809158       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0731 19:36:58.809165       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0731 19:36:58.809511       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0731 19:36:58.811992       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 19:36:58.813465       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 19:36:58.819637       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0731 19:36:58.824664       1 shared_informer.go:320] Caches are synced for namespace
	I0731 19:36:58.830627       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 19:36:58.831885       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 19:36:58.836246       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 19:36:58.839763       1 shared_informer.go:320] Caches are synced for TTL
	I0731 19:36:58.840985       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 19:36:58.848242       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0731 19:36:58.848489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="188.184µs"
	I0731 19:36:58.865778       1 shared_informer.go:320] Caches are synced for daemon sets
	I0731 19:36:58.883192       1 shared_informer.go:320] Caches are synced for disruption
	I0731 19:36:58.890691       1 shared_informer.go:320] Caches are synced for stateful set
	I0731 19:36:58.933146       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 19:36:58.966281       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 19:36:58.970653       1 shared_informer.go:320] Caches are synced for deployment
	I0731 19:36:58.984922       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 19:36:59.017962       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0731 19:36:59.450979       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 19:36:59.464326       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 19:36:59.464361       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [56df5f47a4f36d8cda6aaecfceaa9d39680ceca6ad8a5ae362a55e7382716bb7] <==
	E0731 19:36:34.230114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:34.230178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:34.230220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:34.230311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:35.105776       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:35.105962       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:35.474691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:35.474771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:35.679566       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:35.679639       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:37.232167       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:37.232235       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:37.801891       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:37.802005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:38.382467       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:38.382521       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:41.924123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:41.924204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:42.151317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:42.151430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-406825&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	W0731 19:36:42.691662       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	E0731 19:36:42.691703       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.243:8441: connect: connection refused
	I0731 19:36:49.328798       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:36:50.728678       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:36:53.829723       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a4329523b353453860a827922993f8e3da55a645e509f571832c764f2383e96e] <==
	I0731 19:35:41.559443       1 server_linux.go:69] "Using iptables proxy"
	I0731 19:35:41.572403       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.243"]
	I0731 19:35:41.691837       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:35:41.691907       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:35:41.691926       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:35:41.717252       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:35:41.717412       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:35:41.717421       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:35:41.753163       1 config.go:192] "Starting service config controller"
	I0731 19:35:41.753190       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:35:41.753218       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:35:41.753222       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:35:41.753927       1 config.go:319] "Starting node config controller"
	I0731 19:35:41.753949       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:35:41.853799       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:35:41.853986       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:35:41.854404       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [70cdf402796ffad8d7224e1865f68eb70fe3b2bc991e265c287d04634256b221] <==
	I0731 19:36:34.135999       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0731 19:36:34.144007       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 19:36:34.236987       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0731 19:36:34.244460       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0731 19:36:34.244519       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 19:36:46.526165       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0731 19:36:46.527927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0731 19:36:46.528031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E0731 19:36:46.528096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)
	E0731 19:36:46.528146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)
	E0731 19:36:46.528179       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0731 19:36:46.528234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E0731 19:36:46.528293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E0731 19:36:46.528338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)
	E0731 19:36:46.528394       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E0731 19:36:46.528452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0731 19:36:46.528575       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)
	E0731 19:36:46.528650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E0731 19:36:46.531049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0731 19:36:46.535066       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0731 19:36:46.535216       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E0731 19:36:46.535326       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	I0731 19:37:30.527968       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0731 19:37:30.528050       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0731 19:37:30.528174       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b25f85a57e7f235a82c44cc6d4957430cbf9b17d56b28cca1a4d1a359828c205] <==
	I0731 19:37:33.566550       1 serving.go:380] Generated self-signed cert in-memory
	W0731 19:37:35.243307       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 19:37:35.243352       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:37:35.243362       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 19:37:35.243369       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 19:37:35.288342       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 19:37:35.288375       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:37:35.292976       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 19:37:35.293011       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 19:37:35.296275       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 19:37:35.296341       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 19:37:35.393249       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 19:38:05 functional-406825 kubelet[4139]: I0731 19:38:05.115555    4139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbqmf\" (UniqueName: \"kubernetes.io/projected/c4b6b90a-911b-40bb-82ce-5bd8a541e541-kube-api-access-lbqmf\") pod \"dashboard-metrics-scraper-b5fc48f67-c7rw5\" (UID: \"c4b6b90a-911b-40bb-82ce-5bd8a541e541\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-c7rw5"
	Jul 31 19:38:06 functional-406825 kubelet[4139]: I0731 19:38:06.319510    4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-connect-57b4589c47-6jtbn" podStartSLOduration=1.805543084 podStartE2EDuration="3.319493251s" podCreationTimestamp="2024-07-31 19:38:03 +0000 UTC" firstStartedPulling="2024-07-31 19:38:03.890509195 +0000 UTC m=+31.933084810" lastFinishedPulling="2024-07-31 19:38:05.404459362 +0000 UTC m=+33.447034977" observedRunningTime="2024-07-31 19:38:06.302588697 +0000 UTC m=+34.345164332" watchObservedRunningTime="2024-07-31 19:38:06.319493251 +0000 UTC m=+34.362068922"
	Jul 31 19:38:08 functional-406825 kubelet[4139]: I0731 19:38:08.101708    4139 scope.go:117] "RemoveContainer" containerID="71158336084828f2bdf770c1218462303e6331f6a4cead6203ab3da979315261"
	Jul 31 19:38:08 functional-406825 kubelet[4139]: E0731 19:38:08.101951    4139 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(973a1136-6225-4187-9281-07f81c5f86bc)\"" pod="kube-system/storage-provisioner" podUID="973a1136-6225-4187-9281-07f81c5f86bc"
	Jul 31 19:38:08 functional-406825 kubelet[4139]: I0731 19:38:08.117402    4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-node-6d85cfcfd8-f9pcn" podStartSLOduration=3.835114344 podStartE2EDuration="7.117385798s" podCreationTimestamp="2024-07-31 19:38:01 +0000 UTC" firstStartedPulling="2024-07-31 19:38:01.978227926 +0000 UTC m=+30.020803555" lastFinishedPulling="2024-07-31 19:38:05.260499385 +0000 UTC m=+33.303075009" observedRunningTime="2024-07-31 19:38:06.32055812 +0000 UTC m=+34.363133747" watchObservedRunningTime="2024-07-31 19:38:08.117385798 +0000 UTC m=+36.159961433"
	Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.549163    4139 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2tpv\" (UniqueName: \"kubernetes.io/projected/b155e1cc-6736-497c-8687-7094e35b8f3c-kube-api-access-w2tpv\") pod \"b155e1cc-6736-497c-8687-7094e35b8f3c\" (UID: \"b155e1cc-6736-497c-8687-7094e35b8f3c\") "
	Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.549229    4139 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b155e1cc-6736-497c-8687-7094e35b8f3c-test-volume\") pod \"b155e1cc-6736-497c-8687-7094e35b8f3c\" (UID: \"b155e1cc-6736-497c-8687-7094e35b8f3c\") "
	Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.549332    4139 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b155e1cc-6736-497c-8687-7094e35b8f3c-test-volume" (OuterVolumeSpecName: "test-volume") pod "b155e1cc-6736-497c-8687-7094e35b8f3c" (UID: "b155e1cc-6736-497c-8687-7094e35b8f3c"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.556793    4139 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b155e1cc-6736-497c-8687-7094e35b8f3c-kube-api-access-w2tpv" (OuterVolumeSpecName: "kube-api-access-w2tpv") pod "b155e1cc-6736-497c-8687-7094e35b8f3c" (UID: "b155e1cc-6736-497c-8687-7094e35b8f3c"). InnerVolumeSpecName "kube-api-access-w2tpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.654794    4139 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w2tpv\" (UniqueName: \"kubernetes.io/projected/b155e1cc-6736-497c-8687-7094e35b8f3c-kube-api-access-w2tpv\") on node \"functional-406825\" DevicePath \"\""
	Jul 31 19:38:09 functional-406825 kubelet[4139]: I0731 19:38:09.654876    4139 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b155e1cc-6736-497c-8687-7094e35b8f3c-test-volume\") on node \"functional-406825\" DevicePath \"\""
	Jul 31 19:38:10 functional-406825 kubelet[4139]: I0731 19:38:10.313656    4139 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2097c338ab734b7ecd72810e162a308801482b8a706dd1826b4c8ba1ac3705f2"
	Jul 31 19:38:16 functional-406825 kubelet[4139]: I0731 19:38:16.343457    4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-8c8zn" podStartSLOduration=4.620877292 podStartE2EDuration="12.34344185s" podCreationTimestamp="2024-07-31 19:38:04 +0000 UTC" firstStartedPulling="2024-07-31 19:38:05.686735725 +0000 UTC m=+33.729311350" lastFinishedPulling="2024-07-31 19:38:13.409300292 +0000 UTC m=+41.451875908" observedRunningTime="2024-07-31 19:38:14.338580877 +0000 UTC m=+42.381156512" watchObservedRunningTime="2024-07-31 19:38:16.34344185 +0000 UTC m=+44.386017482"
	Jul 31 19:38:18 functional-406825 kubelet[4139]: I0731 19:38:18.076493    4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-c7rw5" podStartSLOduration=2.847365579 podStartE2EDuration="13.076464909s" podCreationTimestamp="2024-07-31 19:38:05 +0000 UTC" firstStartedPulling="2024-07-31 19:38:05.72202145 +0000 UTC m=+33.764597065" lastFinishedPulling="2024-07-31 19:38:15.951120778 +0000 UTC m=+43.993696395" observedRunningTime="2024-07-31 19:38:16.346052879 +0000 UTC m=+44.388628514" watchObservedRunningTime="2024-07-31 19:38:18.076464909 +0000 UTC m=+46.119040541"
	Jul 31 19:38:18 functional-406825 kubelet[4139]: I0731 19:38:18.076730    4139 topology_manager.go:215] "Topology Admit Handler" podUID="33fc257c-36ce-4d7d-a555-802a3b48cba3" podNamespace="default" podName="mysql-64454c8b5c-w77r4"
	Jul 31 19:38:18 functional-406825 kubelet[4139]: E0731 19:38:18.076857    4139 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b155e1cc-6736-497c-8687-7094e35b8f3c" containerName="mount-munger"
	Jul 31 19:38:18 functional-406825 kubelet[4139]: I0731 19:38:18.076893    4139 memory_manager.go:354] "RemoveStaleState removing state" podUID="b155e1cc-6736-497c-8687-7094e35b8f3c" containerName="mount-munger"
	Jul 31 19:38:18 functional-406825 kubelet[4139]: I0731 19:38:18.215396    4139 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrn2\" (UniqueName: \"kubernetes.io/projected/33fc257c-36ce-4d7d-a555-802a3b48cba3-kube-api-access-ckrn2\") pod \"mysql-64454c8b5c-w77r4\" (UID: \"33fc257c-36ce-4d7d-a555-802a3b48cba3\") " pod="default/mysql-64454c8b5c-w77r4"
	Jul 31 19:38:20 functional-406825 kubelet[4139]: I0731 19:38:20.101484    4139 scope.go:117] "RemoveContainer" containerID="71158336084828f2bdf770c1218462303e6331f6a4cead6203ab3da979315261"
	Jul 31 19:38:31 functional-406825 kubelet[4139]: I0731 19:38:31.722436    4139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/mysql-64454c8b5c-w77r4" podStartSLOduration=2.5177842569999997 podStartE2EDuration="13.72242039s" podCreationTimestamp="2024-07-31 19:38:18 +0000 UTC" firstStartedPulling="2024-07-31 19:38:18.893102629 +0000 UTC m=+46.935678257" lastFinishedPulling="2024-07-31 19:38:30.097738774 +0000 UTC m=+58.140314390" observedRunningTime="2024-07-31 19:38:31.714548437 +0000 UTC m=+59.757124073" watchObservedRunningTime="2024-07-31 19:38:31.72242039 +0000 UTC m=+59.764996025"
	Jul 31 19:38:32 functional-406825 kubelet[4139]: E0731 19:38:32.158101    4139 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:38:32 functional-406825 kubelet[4139]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:38:32 functional-406825 kubelet[4139]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:38:32 functional-406825 kubelet[4139]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:38:32 functional-406825 kubelet[4139]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> kubernetes-dashboard [47999e35dc24915a773e1185fd92cc8588af0b1b0505a9091c99c7bd86e7c4e3] <==
	2024/07/31 19:38:13 Starting overwatch
	2024/07/31 19:38:13 Using namespace: kubernetes-dashboard
	2024/07/31 19:38:13 Using in-cluster config to connect to apiserver
	2024/07/31 19:38:13 Using secret token for csrf signing
	2024/07/31 19:38:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/31 19:38:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/31 19:38:13 Successful initial request to the apiserver, version: v1.30.3
	2024/07/31 19:38:13 Generating JWE encryption key
	2024/07/31 19:38:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/31 19:38:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/31 19:38:14 Initializing JWE encryption key from synchronized object
	2024/07/31 19:38:14 Creating in-cluster Sidecar client
	2024/07/31 19:38:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/31 19:38:14 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [71158336084828f2bdf770c1218462303e6331f6a4cead6203ab3da979315261] <==
	I0731 19:37:52.260263       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 19:37:52.262748       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b83ebda98a07f0ea988f240d06c292fd6fe8800582e555fd805c17b20b74e7a8] <==
	I0731 19:38:20.214911       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 19:38:20.224630       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 19:38:20.225690       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-406825 -n functional-406825
helpers_test.go:261: (dbg) Run:  kubectl --context functional-406825 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-406825 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-406825 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-406825/192.168.39.243
	Start Time:       Wed, 31 Jul 2024 19:38:03 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  mount-munger:
	    Container ID:  containerd://faafbb13ccfd582f2820e418ac00dc70d545b21f4283fa15e4c7eac0608f4656
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 31 Jul 2024 19:38:07 +0000
	      Finished:     Wed, 31 Jul 2024 19:38:07 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w2tpv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-w2tpv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  30s   default-scheduler  Successfully assigned default/busybox-mount to functional-406825
	  Normal  Pulling    29s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     26s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.182s (3.351s including waiting). Image size: 2395207 bytes.
	  Normal  Created    26s   kubelet            Created container mount-munger
	  Normal  Started    26s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (30.39s)


Test pass (294/334)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 26.52
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 11.91
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 19.96
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 58.45
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 215.75
38 TestAddons/serial/Volcano 40.65
40 TestAddons/serial/GCPAuth/Namespaces 0.12
42 TestAddons/parallel/Registry 15.95
43 TestAddons/parallel/Ingress 20.23
44 TestAddons/parallel/InspektorGadget 11.95
45 TestAddons/parallel/MetricsServer 6.71
46 TestAddons/parallel/HelmTiller 10.68
48 TestAddons/parallel/CSI 58.28
49 TestAddons/parallel/Headlamp 28.65
50 TestAddons/parallel/CloudSpanner 6.57
51 TestAddons/parallel/LocalPath 52.98
52 TestAddons/parallel/NvidiaDevicePlugin 5.49
53 TestAddons/parallel/Yakd 11.78
54 TestAddons/StoppedEnableDisable 91.85
55 TestCertOptions 74.99
56 TestCertExpiration 307.48
58 TestForceSystemdFlag 74.27
59 TestForceSystemdEnv 96.63
61 TestKVMDriverInstallOrUpdate 4.29
65 TestErrorSpam/setup 38.53
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.7
68 TestErrorSpam/pause 1.45
69 TestErrorSpam/unpause 1.49
70 TestErrorSpam/stop 4.23
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 96.44
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 45.27
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.03
82 TestFunctional/serial/CacheCmd/cache/add_local 2.13
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
87 TestFunctional/serial/CacheCmd/cache/delete 0.1
88 TestFunctional/serial/MinikubeKubectlCmd 0.11
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 38.41
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.24
93 TestFunctional/serial/LogsFileCmd 1.3
94 TestFunctional/serial/InvalidService 3.94
96 TestFunctional/parallel/ConfigCmd 0.33
97 TestFunctional/parallel/DashboardCmd 12.89
98 TestFunctional/parallel/DryRun 0.34
99 TestFunctional/parallel/InternationalLanguage 0.18
100 TestFunctional/parallel/StatusCmd 0.98
105 TestFunctional/parallel/AddonsCmd 0.12
106 TestFunctional/parallel/PersistentVolumeClaim 48.96
108 TestFunctional/parallel/SSHCmd 0.38
109 TestFunctional/parallel/CpCmd 1.31
110 TestFunctional/parallel/MySQL 25.71
111 TestFunctional/parallel/FileSync 0.2
112 TestFunctional/parallel/CertSync 1.17
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
120 TestFunctional/parallel/License 0.62
121 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
123 TestFunctional/parallel/MountCmd/any-port 8.55
124 TestFunctional/parallel/ProfileCmd/profile_list 0.32
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
126 TestFunctional/parallel/MountCmd/specific-port 1.71
127 TestFunctional/parallel/MountCmd/VerifyCleanup 1.46
128 TestFunctional/parallel/ServiceCmd/List 0.84
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.88
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
140 TestFunctional/parallel/ServiceCmd/Format 0.27
141 TestFunctional/parallel/ServiceCmd/URL 0.28
142 TestFunctional/parallel/Version/short 0.05
143 TestFunctional/parallel/Version/components 0.47
144 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
145 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
146 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
147 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
148 TestFunctional/parallel/ImageCommands/ImageBuild 4.62
149 TestFunctional/parallel/ImageCommands/Setup 1.75
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.81
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.08
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.21
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.61
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.23
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 223.73
167 TestMultiControlPlane/serial/DeployApp 5.8
168 TestMultiControlPlane/serial/PingHostFromPods 1.16
169 TestMultiControlPlane/serial/AddWorkerNode 55.44
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
172 TestMultiControlPlane/serial/CopyFile 12.59
173 TestMultiControlPlane/serial/StopSecondaryNode 92.06
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.39
175 TestMultiControlPlane/serial/RestartSecondaryNode 36.76
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.52
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 414.48
178 TestMultiControlPlane/serial/DeleteSecondaryNode 7.59
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
180 TestMultiControlPlane/serial/StopCluster 274.41
181 TestMultiControlPlane/serial/RestartCluster 118.22
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
183 TestMultiControlPlane/serial/AddSecondaryNode 74.4
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
188 TestJSONOutput/start/Command 93.59
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.67
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.6
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 6.42
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.05
217 TestMinikubeProfile 90.73
220 TestMountStart/serial/StartWithMountFirst 31.06
221 TestMountStart/serial/VerifyMountFirst 0.38
222 TestMountStart/serial/StartWithMountSecond 24.43
223 TestMountStart/serial/VerifyMountSecond 0.38
224 TestMountStart/serial/DeleteFirst 0.67
225 TestMountStart/serial/VerifyMountPostDelete 0.38
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 24.32
228 TestMountStart/serial/VerifyMountPostStop 0.37
231 TestMultiNode/serial/FreshStart2Nodes 124.64
232 TestMultiNode/serial/DeployApp2Nodes 5.11
233 TestMultiNode/serial/PingHostFrom2Pods 0.79
234 TestMultiNode/serial/AddNode 47.69
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 7.12
238 TestMultiNode/serial/StopNode 2.11
239 TestMultiNode/serial/StartAfterStop 34.71
240 TestMultiNode/serial/RestartKeepsNodes 315.69
241 TestMultiNode/serial/DeleteNode 2.26
242 TestMultiNode/serial/StopMultiNode 183.1
243 TestMultiNode/serial/RestartMultiNode 89.93
244 TestMultiNode/serial/ValidateNameConflict 41.86
249 TestPreload 321.37
251 TestScheduledStopUnix 110.59
255 TestRunningBinaryUpgrade 142.48
257 TestKubernetesUpgrade 189.24
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 121.22
269 TestNetworkPlugins/group/false 3.1
273 TestNoKubernetes/serial/StartWithStopK8s 21.11
274 TestNoKubernetes/serial/Start 38.57
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
276 TestNoKubernetes/serial/ProfileList 1.43
277 TestNoKubernetes/serial/Stop 1.29
278 TestNoKubernetes/serial/StartNoArgs 38.05
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
280 TestStoppedBinaryUpgrade/Setup 2.41
281 TestStoppedBinaryUpgrade/Upgrade 151.4
290 TestPause/serial/Start 64.72
291 TestNetworkPlugins/group/auto/Start 122.79
292 TestNetworkPlugins/group/kindnet/Start 101.71
293 TestPause/serial/SecondStartNoReconfiguration 57.5
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
295 TestNetworkPlugins/group/calico/Start 95.11
296 TestPause/serial/Pause 0.69
297 TestPause/serial/VerifyStatus 0.24
298 TestPause/serial/Unpause 0.61
299 TestPause/serial/PauseAgain 0.77
300 TestPause/serial/DeletePaused 1.02
301 TestPause/serial/VerifyDeletedResources 0.61
302 TestNetworkPlugins/group/custom-flannel/Start 85.22
303 TestNetworkPlugins/group/auto/KubeletFlags 0.21
304 TestNetworkPlugins/group/auto/NetCatPod 9.28
305 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
306 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
307 TestNetworkPlugins/group/kindnet/NetCatPod 9.25
308 TestNetworkPlugins/group/auto/DNS 0.25
309 TestNetworkPlugins/group/auto/Localhost 0.19
310 TestNetworkPlugins/group/auto/HairPin 0.22
311 TestNetworkPlugins/group/kindnet/DNS 0.18
312 TestNetworkPlugins/group/kindnet/Localhost 0.13
313 TestNetworkPlugins/group/kindnet/HairPin 0.14
314 TestNetworkPlugins/group/enable-default-cni/Start 102.15
315 TestNetworkPlugins/group/flannel/Start 97.82
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/calico/KubeletFlags 0.2
318 TestNetworkPlugins/group/calico/NetCatPod 9.21
319 TestNetworkPlugins/group/calico/DNS 0.16
320 TestNetworkPlugins/group/calico/Localhost 0.13
321 TestNetworkPlugins/group/calico/HairPin 0.13
322 TestNetworkPlugins/group/bridge/Start 108.42
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.81
325 TestNetworkPlugins/group/custom-flannel/DNS 0.17
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
329 TestStartStop/group/old-k8s-version/serial/FirstStart 181.92
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.2
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
334 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
335 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
336 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
337 TestNetworkPlugins/group/flannel/NetCatPod 10.3
338 TestNetworkPlugins/group/flannel/DNS 0.18
339 TestNetworkPlugins/group/flannel/Localhost 0.15
340 TestNetworkPlugins/group/flannel/HairPin 0.15
342 TestStartStop/group/no-preload/serial/FirstStart 102.16
344 TestStartStop/group/embed-certs/serial/FirstStart 82.42
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
346 TestNetworkPlugins/group/bridge/NetCatPod 9.22
347 TestNetworkPlugins/group/bridge/DNS 0.21
348 TestNetworkPlugins/group/bridge/Localhost 0.22
349 TestNetworkPlugins/group/bridge/HairPin 0.15
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.94
352 TestStartStop/group/embed-certs/serial/DeployApp 10.3
353 TestStartStop/group/no-preload/serial/DeployApp 9.28
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
355 TestStartStop/group/embed-certs/serial/Stop 91.62
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
357 TestStartStop/group/no-preload/serial/Stop 91.56
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.25
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.57
361 TestStartStop/group/old-k8s-version/serial/DeployApp 9.42
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.96
363 TestStartStop/group/old-k8s-version/serial/Stop 91.65
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
365 TestStartStop/group/embed-certs/serial/SecondStart 292.15
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
367 TestStartStop/group/no-preload/serial/SecondStart 318.99
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 344.65
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
371 TestStartStop/group/old-k8s-version/serial/SecondStart 435.47
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
375 TestStartStop/group/embed-certs/serial/Pause 2.65
377 TestStartStop/group/newest-cni/serial/FirstStart 46.53
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 8.01
379 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
380 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
381 TestStartStop/group/no-preload/serial/Pause 2.75
382 TestStartStop/group/newest-cni/serial/DeployApp 0
383 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
384 TestStartStop/group/newest-cni/serial/Stop 2.33
385 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
386 TestStartStop/group/newest-cni/serial/SecondStart 32.43
387 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
388 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.61
391 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
393 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
394 TestStartStop/group/newest-cni/serial/Pause 2.29
395 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
396 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
397 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
398 TestStartStop/group/old-k8s-version/serial/Pause 2.29
x
+
TestDownloadOnly/v1.20.0/json-events (26.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-653161 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-653161 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (26.521370639s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (26.52s)
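For context, the json-events test exercises the real minikube binary and only inspects the child process's exit status and elapsed time. A minimal Go sketch of that run-and-time pattern follows; the binary path, profile name, and flag set are illustrative, not the harness's exact wiring.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Path and profile name are illustrative assumptions.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-o=json", "--download-only", "-p", "download-only-demo",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0",
		"--driver=kvm2",
		"--container-runtime=containerd")
	// CombinedOutput captures stdout+stderr together, matching how the
	// report interleaves them above.
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("start failed after %s: %v\n%s", time.Since(start), err, out)
		return
	}
	fmt.Printf("download-only start completed in %s\n", time.Since(start))
}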

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-653161
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-653161: exit status 85 (58.686195ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-653161 | jenkins | v1.33.1 | 31 Jul 24 19:25 UTC |          |
	|         | -p download-only-653161        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:25:43
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:25:43.761382  624161 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:25:43.761637  624161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:25:43.761645  624161 out.go:304] Setting ErrFile to fd 2...
	I0731 19:25:43.761654  624161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:25:43.762222  624161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	W0731 19:25:43.762444  624161 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19355-616888/.minikube/config/config.json: open /home/jenkins/minikube-integration/19355-616888/.minikube/config/config.json: no such file or directory
	I0731 19:25:43.763392  624161 out.go:298] Setting JSON to true
	I0731 19:25:43.764588  624161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":11288,"bootTime":1722442656,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:25:43.764653  624161 start.go:139] virtualization: kvm guest
	I0731 19:25:43.766946  624161 out.go:97] [download-only-653161] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:25:43.767059  624161 notify.go:220] Checking for updates...
	W0731 19:25:43.767071  624161 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19355-616888/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 19:25:43.768381  624161 out.go:169] MINIKUBE_LOCATION=19355
	I0731 19:25:43.769736  624161 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:25:43.771009  624161 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	I0731 19:25:43.772170  624161 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	I0731 19:25:43.773307  624161 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 19:25:43.775650  624161 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 19:25:43.775865  624161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:25:43.807746  624161 out.go:97] Using the kvm2 driver based on user configuration
	I0731 19:25:43.807772  624161 start.go:297] selected driver: kvm2
	I0731 19:25:43.807779  624161 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:25:43.808117  624161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:25:43.808212  624161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-616888/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:25:43.823608  624161 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:25:43.823658  624161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:25:43.824142  624161 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 19:25:43.824307  624161 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 19:25:43.824391  624161 cni.go:84] Creating CNI manager for ""
	I0731 19:25:43.824406  624161 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0731 19:25:43.824417  624161 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:25:43.824493  624161 start.go:340] cluster config:
	{Name:download-only-653161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-653161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:25:43.824773  624161 iso.go:125] acquiring lock: {Name:mkf228a10da0353a91c9c6584611941c9f887339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:25:43.826485  624161 out.go:97] Downloading VM boot image ...
	I0731 19:25:43.826515  624161 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19355-616888/.minikube/cache/iso/amd64/minikube-v1.33.1-1722420371-19355-amd64.iso
	I0731 19:25:56.649031  624161 out.go:97] Starting "download-only-653161" primary control-plane node in "download-only-653161" cluster
	I0731 19:25:56.649054  624161 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0731 19:25:56.749041  624161 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0731 19:25:56.749082  624161 cache.go:56] Caching tarball of preloaded images
	I0731 19:25:56.749272  624161 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0731 19:25:56.751071  624161 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 19:25:56.751090  624161 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0731 19:25:56.848120  624161 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/19355-616888/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0731 19:26:08.567694  624161 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0731 19:26:08.567797  624161 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19355-616888/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-653161 host does not exist
	  To start a cluster, run: "minikube start -p download-only-653161"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
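The exit status 85 above is expected: after a --download-only run the profile's host was never created, so "minikube logs" has nothing to inspect, and the test only measures the command's duration. A sketch of distinguishing such a specific exit code in Go follows; binary path and profile name are again illustrative, and the meaning of 85 is taken from the report output, not from this snippet.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-demo")
	out, err := cmd.CombinedOutput()
	// A non-zero exit surfaces as *exec.ExitError; ExitCode() yields the
	// status the report prints (85 here, per the log above).
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("minikube logs exited %d\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("err=%v\n%s", err, out)
}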

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-653161
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (11.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-536524 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-536524 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (11.91002522s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (11.91s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-536524
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-536524: exit status 85 (58.560565ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-653161 | jenkins | v1.33.1 | 31 Jul 24 19:25 UTC |                     |
	|         | -p download-only-653161        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:26 UTC |
	| delete  | -p download-only-653161        | download-only-653161 | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:26 UTC |
	| start   | -o=json --download-only        | download-only-536524 | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC |                     |
	|         | -p download-only-536524        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:26:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:26:10.603305  624416 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:26:10.603603  624416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:26:10.603615  624416 out.go:304] Setting ErrFile to fd 2...
	I0731 19:26:10.603621  624416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:26:10.603803  624416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 19:26:10.604391  624416 out.go:298] Setting JSON to true
	I0731 19:26:10.605464  624416 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":11315,"bootTime":1722442656,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:26:10.605525  624416 start.go:139] virtualization: kvm guest
	I0731 19:26:10.607657  624416 out.go:97] [download-only-536524] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:26:10.607820  624416 notify.go:220] Checking for updates...
	I0731 19:26:10.609175  624416 out.go:169] MINIKUBE_LOCATION=19355
	I0731 19:26:10.610615  624416 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:26:10.611918  624416 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	I0731 19:26:10.613304  624416 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	I0731 19:26:10.614675  624416 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 19:26:10.616923  624416 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 19:26:10.617160  624416 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:26:10.648518  624416 out.go:97] Using the kvm2 driver based on user configuration
	I0731 19:26:10.648561  624416 start.go:297] selected driver: kvm2
	I0731 19:26:10.648567  624416 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:26:10.648893  624416 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:26:10.648968  624416 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-616888/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:26:10.663935  624416 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:26:10.663987  624416 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:26:10.664476  624416 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 19:26:10.664651  624416 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 19:26:10.664688  624416 cni.go:84] Creating CNI manager for ""
	I0731 19:26:10.664703  624416 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0731 19:26:10.664720  624416 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:26:10.664780  624416 start.go:340] cluster config:
	{Name:download-only-536524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-536524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:26:10.664911  624416 iso.go:125] acquiring lock: {Name:mkf228a10da0353a91c9c6584611941c9f887339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:26:10.666678  624416 out.go:97] Starting "download-only-536524" primary control-plane node in "download-only-536524" cluster
	I0731 19:26:10.666702  624416 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0731 19:26:10.763612  624416 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4
	I0731 19:26:10.763655  624416 cache.go:56] Caching tarball of preloaded images
	I0731 19:26:10.763830  624416 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0731 19:26:10.765683  624416 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 19:26:10.765704  624416 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4 ...
	I0731 19:26:10.864841  624416 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:1b8c063785761b3e6ff228c42e3a8cf1 -> /home/jenkins/minikube-integration/19355-616888/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-536524 host does not exist
	  To start a cluster, run: "minikube start -p download-only-536524"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-536524
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/json-events (19.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-974741 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-974741 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (19.964073024s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (19.96s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-974741
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-974741: exit status 85 (60.179003ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-653161 | jenkins | v1.33.1 | 31 Jul 24 19:25 UTC |                     |
	|         | -p download-only-653161             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:26 UTC |
	| delete  | -p download-only-653161             | download-only-653161 | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:26 UTC |
	| start   | -o=json --download-only             | download-only-536524 | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC |                     |
	|         | -p download-only-536524             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:26 UTC |
	| delete  | -p download-only-536524             | download-only-536524 | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:26 UTC |
	| start   | -o=json --download-only             | download-only-974741 | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC |                     |
	|         | -p download-only-974741             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:26:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:26:22.830316  624621 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:26:22.830423  624621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:26:22.830431  624621 out.go:304] Setting ErrFile to fd 2...
	I0731 19:26:22.830435  624621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:26:22.830612  624621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 19:26:22.831170  624621 out.go:298] Setting JSON to true
	I0731 19:26:22.832236  624621 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":11327,"bootTime":1722442656,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:26:22.832296  624621 start.go:139] virtualization: kvm guest
	I0731 19:26:22.834489  624621 out.go:97] [download-only-974741] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:26:22.834677  624621 notify.go:220] Checking for updates...
	I0731 19:26:22.836074  624621 out.go:169] MINIKUBE_LOCATION=19355
	I0731 19:26:22.837347  624621 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:26:22.838600  624621 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	I0731 19:26:22.839792  624621 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	I0731 19:26:22.841091  624621 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 19:26:22.843414  624621 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 19:26:22.843657  624621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:26:22.874837  624621 out.go:97] Using the kvm2 driver based on user configuration
	I0731 19:26:22.874867  624621 start.go:297] selected driver: kvm2
	I0731 19:26:22.874873  624621 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:26:22.875224  624621 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:26:22.875301  624621 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19355-616888/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:26:22.890409  624621 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:26:22.890475  624621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:26:22.891094  624621 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 19:26:22.891300  624621 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 19:26:22.891390  624621 cni.go:84] Creating CNI manager for ""
	I0731 19:26:22.891407  624621 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0731 19:26:22.891420  624621 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:26:22.891501  624621 start.go:340] cluster config:
	{Name:download-only-974741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-974741 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:26:22.891654  624621 iso.go:125] acquiring lock: {Name:mkf228a10da0353a91c9c6584611941c9f887339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:26:22.893412  624621 out.go:97] Starting "download-only-974741" primary control-plane node in "download-only-974741" cluster
	I0731 19:26:22.893448  624621 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0731 19:26:22.994923  624621 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0731 19:26:22.994964  624621 cache.go:56] Caching tarball of preloaded images
	I0731 19:26:22.995187  624621 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0731 19:26:22.997023  624621 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 19:26:22.997042  624621 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0731 19:26:23.098556  624621 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:317e542de842a84eade9a0e3b4ea7005 -> /home/jenkins/minikube-integration/19355-616888/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4
	I0731 19:26:32.752253  624621 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0731 19:26:32.752358  624621 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19355-616888/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4 ...
	I0731 19:26:33.489962  624621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on containerd
	I0731 19:26:33.490346  624621 profile.go:143] Saving config to /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/download-only-974741/config.json ...
	I0731 19:26:33.490385  624621 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/download-only-974741/config.json: {Name:mkadc49fa72495f1bc3760a26314e726db7e5ef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:26:33.490579  624621 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0731 19:26:33.490745  624621 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19355-616888/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-974741 host does not exist
	  To start a cluster, run: "minikube start -p download-only-974741"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
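Each preload above is fetched with a "?checksum=md5:..." query string and verified before use (the "getting checksum" / "verifying checksum" lines). Below is a stripped-down sketch of that download-and-verify pattern: the URL and md5 are copied from the log, while the destination path and function shape are invented for illustration.

package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"net/http"
	"os"
)

func download(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash while writing so the tarball is trusted only if the digest matches.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := fmt.Sprintf("%x", h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and md5 taken from the log above; /tmp path is an assumption.
	err := download(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-amd64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"317e542de842a84eade9a0e3b4ea7005",
	)
	fmt.Println("download:", err)
}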

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-974741
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-088725 --alsologtostderr --binary-mirror http://127.0.0.1:34475 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-088725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-088725
--- PASS: TestBinaryMirror (0.56s)
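TestBinaryMirror starts minikube with "--binary-mirror http://127.0.0.1:34475", redirecting binary downloads away from the default upstream. Functionally a mirror only needs to serve the expected files over HTTP, so a minimal stand-in could look like the following sketch; the served directory layout is an assumption, not minikube's documented contract.

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local cache directory over HTTP; minikube is then started with
	// --binary-mirror pointing at this address. /tmp/k8s-binaries is a
	// placeholder directory for illustration.
	http.Handle("/", http.FileServer(http.Dir("/tmp/k8s-binaries")))
	log.Fatal(http.ListenAndServe("127.0.0.1:34475", nil))
}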

                                                
                                    
x
+
TestOffline (58.45s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-283585 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-283585 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (57.423252749s)
helpers_test.go:175: Cleaning up "offline-containerd-283585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-283585
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-283585: (1.024895245s)
--- PASS: TestOffline (58.45s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-449571
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-449571: exit status 85 (51.292078ms)

                                                
                                                
-- stdout --
	* Profile "addons-449571" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-449571"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-449571
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-449571: exit status 85 (52.487257ms)

                                                
                                                
-- stdout --
	* Profile "addons-449571" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-449571"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (215.75s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-449571 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-449571 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m35.752485879s)
--- PASS: TestAddons/Setup (215.75s)

                                                
                                    
x
+
TestAddons/serial/Volcano (40.65s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 17.992392ms
addons_test.go:897: volcano-scheduler stabilized in 18.050049ms
addons_test.go:905: volcano-admission stabilized in 18.083229ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-lppd6" [c4b0bda4-c990-4a57-ad10-a5418cd61f61] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003736824s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-nb5km" [d9ba54a6-88e7-4f83-87e1-2c271fdee001] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00395089s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-htscz" [590ff355-8919-4d8d-9e1b-1a23fa0c3b82] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.006037289s
addons_test.go:932: (dbg) Run:  kubectl --context addons-449571 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-449571 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-449571 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5891aeba-2f4a-431e-ad5c-3ff7b1e8275f] Pending
helpers_test.go:344: "test-job-nginx-0" [5891aeba-2f4a-431e-ad5c-3ff7b1e8275f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5891aeba-2f4a-431e-ad5c-3ff7b1e8275f] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003979192s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-449571 addons disable volcano --alsologtostderr -v=1: (10.251913496s)
--- PASS: TestAddons/serial/Volcano (40.65s)
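The repeated "waiting 6m0s for pods matching ..." steps are a poll-until-Running loop over a label selector. Sketched directly against client-go below, as an illustration of the pattern rather than the harness's actual helpers_test.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls until at least one pod matching selector in ns
// reports phase Running, or the timeout expires.
func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // keep polling until a pod is Running
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute))
}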

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-449571 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-449571 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Registry (15.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.118014ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-9l7cp" [3b09f907-2299-44d4-a279-33a544ecb1e0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004103865s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ftc55" [dd25f88b-39a2-47c5-a4cc-7c8e04c50b73] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004087424s
addons_test.go:342: (dbg) Run:  kubectl --context addons-449571 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-449571 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-449571 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.130746795s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 ip
2024/07/31 19:31:35 [DEBUG] GET http://192.168.39.241:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.95s)
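The registry assertions reduce to one in-cluster probe and one host-side probe. A rough shell equivalent using the same image and endpoints as the log:

	# in-cluster: the registry Service must answer on its cluster DNS name
	kubectl --context addons-449571 run --rm registry-test --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -it -- \
	    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# host-side: registry-proxy republishes it on the VM IP at port 5000
	curl -s "http://$(out/minikube-linux-amd64 -p addons-449571 ip):5000"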

TestAddons/parallel/Ingress (20.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-449571 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-449571 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-449571 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [941c9522-8fd1-4c8e-8663-6ec43c5c53d6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [941c9522-8fd1-4c8e-8663-6ec43c5c53d6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004379253s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-449571 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.241
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-449571 addons disable ingress-dns --alsologtostderr -v=1: (1.325313703s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-449571 addons disable ingress --alsologtostderr -v=1: (7.788465123s)
--- PASS: TestAddons/parallel/Ingress (20.23s)
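Both ingress checks are reproducible without the test harness: an HTTP request routed by Host header from inside the VM, and a DNS query answered by ingress-dns on the VM IP. A sketch, with IP as my own shorthand variable:

	IP=$(out/minikube-linux-amd64 -p addons-449571 ip)
	# ingress: nginx routes on the Host header, so curl from inside the VM
	out/minikube-linux-amd64 -p addons-449571 ssh \
	    "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# ingress-dns: the VM answers DNS for hostnames from the example manifest
	nslookup hello-john.test "$IP"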

TestAddons/parallel/InspektorGadget (11.95s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gh4mj" [3126df79-d66d-43d4-8db6-ff5cd3233c72] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.008835713s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-449571
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-449571: (5.94445232s)
--- PASS: TestAddons/parallel/InspektorGadget (11.95s)

TestAddons/parallel/MetricsServer (6.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.223788ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-bqqjz" [93b8f2b6-1bdf-4d53-a213-31b1a5531c2a] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005737168s
addons_test.go:417: (dbg) Run:  kubectl --context addons-449571 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.71s)

TestAddons/parallel/HelmTiller (10.68s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.6253ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-vph66" [76891d5e-7b8a-4e12-919e-627ffed22e76] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.006734189s
addons_test.go:475: (dbg) Run:  kubectl --context addons-449571 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-449571 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.107729059s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.68s)
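The tiller check works because helm v2's version subcommand contacts tiller-deploy in-cluster, so a non-zero exit would mean tiller is unreachable. The probe from the log, runnable as-is:

	# a throwaway helm 2 client pod; success implies tiller answered
	kubectl --context addons-449571 run --rm helm-test --restart=Never \
	    --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version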

TestAddons/parallel/CSI (58.28s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.654637ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-449571 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-449571 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [05d428e8-db33-4ba5-8b3d-232180b967cd] Pending
helpers_test.go:344: "task-pv-pod" [05d428e8-db33-4ba5-8b3d-232180b967cd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [05d428e8-db33-4ba5-8b3d-232180b967cd] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004214994s
addons_test.go:590: (dbg) Run:  kubectl --context addons-449571 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-449571 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-449571 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-449571 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-449571 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-449571 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-449571 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [02b675e5-2a5a-41b4-8007-7b08eed2f4c5] Pending
helpers_test.go:344: "task-pv-pod-restore" [02b675e5-2a5a-41b4-8007-7b08eed2f4c5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [02b675e5-2a5a-41b4-8007-7b08eed2f4c5] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005110534s
addons_test.go:632: (dbg) Run:  kubectl --context addons-449571 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-449571 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-449571 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-449571 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.771533229s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.28s)
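Each repeated "get pvc ... -o jsonpath={.status.phase}" line above is one iteration of the helper's poll loop. A minimal shell stand-in for the same wait, assuming the PVC and snapshot names from the test (the real helper adds a timeout):

	# poll the PVC until it reports Bound
	until [ "$(kubectl --context addons-449571 get pvc hpvc \
	    -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done
	# snapshot readiness is polled the same way via readyToUse
	kubectl --context addons-449571 get volumesnapshot new-snapshot-demo \
	    -o jsonpath='{.status.readyToUse}'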

TestAddons/parallel/Headlamp (28.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-449571 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-78xjr" [b3c5159a-8106-4686-a5be-6673e551cf15] Pending
helpers_test.go:344: "headlamp-7867546754-78xjr" [b3c5159a-8106-4686-a5be-6673e551cf15] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-78xjr" [b3c5159a-8106-4686-a5be-6673e551cf15] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.00324996s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-449571 addons disable headlamp --alsologtostderr -v=1: (5.757689187s)
--- PASS: TestAddons/parallel/Headlamp (28.65s)

TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-w7wct" [9affa52d-b0ec-4e10-9955-37ed99b0e9da] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003931878s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-449571
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

TestAddons/parallel/LocalPath (52.98s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-449571 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-449571 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-449571 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [76945bb4-a2b9-4107-9aaa-c31087c474c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [76945bb4-a2b9-4107-9aaa-c31087c474c0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [76945bb4-a2b9-4107-9aaa-c31087c474c0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004357526s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-449571 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 ssh "cat /opt/local-path-provisioner/pvc-5af440b9-eda4-4779-90af-bd26d7792960_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-449571 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-449571 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-449571 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.165792743s)
--- PASS: TestAddons/parallel/LocalPath (52.98s)
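The read-back step above works because local-path provisions hostPath volumes under /opt/local-path-provisioner inside the VM. The pvc-<uid> directory name changes every run, so the glob below is my own shorthand, not what the test does:

	# the exact pvc-<uid> prefix differs per run; glob over it
	out/minikube-linux-amd64 -p addons-449571 ssh \
	    "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"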

TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6dvzl" [c389625b-87a4-4c82-8e0d-326c91d70948] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004874896s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-449571
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/parallel/Yakd (11.78s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-xj4ss" [579f86f6-010b-47f2-9d37-31919ff5ae99] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004205522s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-449571 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-449571 addons disable yakd --alsologtostderr -v=1: (5.777113958s)
--- PASS: TestAddons/parallel/Yakd (11.78s)

TestAddons/StoppedEnableDisable (91.85s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-449571
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-449571: (1m31.572351427s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-449571
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-449571
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-449571
--- PASS: TestAddons/StoppedEnableDisable (91.85s)

TestCertOptions (74.99s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-722773 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-722773 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m13.554282826s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-722773 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-722773 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-722773 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-722773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-722773
--- PASS: TestCertOptions (74.99s)
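What the ssh/openssl step is actually verifying: the extra --apiserver-ips and --apiserver-names must appear in the certificate's SANs, and the kubeconfig must use the non-default port 8555. A hand-run sketch (the grep filters are my shorthand, not the test's assertions):

	# SANs should list 127.0.0.1, 192.168.15.15, localhost, www.google.com
	out/minikube-linux-amd64 -p cert-options-722773 ssh \
	    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	    | grep -A1 'Subject Alternative Name'
	# the advertised server URL should end in :8555
	kubectl --context cert-options-722773 config view | grep server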

TestCertExpiration (307.48s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-485296 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-485296 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m9.990287552s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-485296 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-485296 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (56.438089249s)
helpers_test.go:175: Cleaning up "cert-expiration-485296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-485296
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-485296: (1.049540223s)
--- PASS: TestCertExpiration (307.48s)
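The test boots with three-minute certificates, lets them lapse during the pause, then restarts with --cert-expiration=8760h (one year) to prove regeneration works. The effective expiry can be read straight off the certificate:

	# print the apiserver certificate's notAfter timestamp inside the VM
	out/minikube-linux-amd64 -p cert-expiration-485296 ssh \
	    "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"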

TestForceSystemdFlag (74.27s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-471891 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-471891 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m13.051851558s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-471891 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-471891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-471891
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-471891: (1.01283852s)
--- PASS: TestForceSystemdFlag (74.27s)
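The "cat /etc/containerd/config.toml" step feeds an assertion on the cgroup driver. A narrower hand-check (grep is my shorthand; SystemdCgroup = true is containerd's standard key for the systemd cgroup driver):

	# with --force-systemd, containerd should carry SystemdCgroup = true
	out/minikube-linux-amd64 -p force-systemd-flag-471891 ssh \
	    "grep SystemdCgroup /etc/containerd/config.toml"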

TestForceSystemdEnv (96.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-327694 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-327694 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m35.440389129s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-327694 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-327694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-327694
--- PASS: TestForceSystemdEnv (96.63s)

TestKVMDriverInstallOrUpdate (4.29s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.29s)

TestErrorSpam/setup (38.53s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-290383 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-290383 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-290383 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-290383 --driver=kvm2  --container-runtime=containerd: (38.533337107s)
--- PASS: TestErrorSpam/setup (38.53s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 status
--- PASS: TestErrorSpam/status (0.70s)

TestErrorSpam/pause (1.45s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.49s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (4.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 stop: (1.290593098s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 stop: (1.334838327s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-290383 --log_dir /tmp/nospam-290383 stop: (1.605759583s)
--- PASS: TestErrorSpam/stop (4.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19355-616888/.minikube/files/etc/test/nested/copy/624149/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (96.44s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-406825 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0731 19:35:19.822657  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:19.828480  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:19.838836  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:19.859150  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:19.899487  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:19.979837  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:20.140331  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:20.461155  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:21.101921  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:22.382465  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:24.943288  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:30.064200  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:35:40.304734  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:36:00.785924  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-406825 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m36.435690144s)
--- PASS: TestFunctional/serial/StartWithProxy (96.44s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (45.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-406825 --alsologtostderr -v=8
E0731 19:36:41.747246  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-406825 --alsologtostderr -v=8: (45.26811445s)
functional_test.go:659: soft start took 45.268837922s for "functional-406825" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.27s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-406825 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 cache add registry.k8s.io/pause:3.1: (1.44290212s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 cache add registry.k8s.io/pause:3.3: (1.335248383s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 cache add registry.k8s.io/pause:latest: (1.249300016s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.03s)

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-406825 /tmp/TestFunctionalserialCacheCmdcacheadd_local2483602484/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 cache add minikube-local-cache-test:functional-406825
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 cache add minikube-local-cache-test:functional-406825: (1.817372412s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 cache delete minikube-local-cache-test:functional-406825
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-406825
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.95972ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 cache reload: (1.058515305s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
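The reload round-trip above is replayable verbatim: delete the image on the node, prove it is gone, then let cache reload push it back from the host-side cache:

	out/minikube-linux-amd64 -p functional-406825 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-406825 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
	out/minikube-linux-amd64 -p functional-406825 cache reload
	out/minikube-linux-amd64 -p functional-406825 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again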

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 kubectl -- --context functional-406825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-406825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-406825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-406825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.413134703s)
functional_test.go:757: restart took 38.413245223s for "functional-406825" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.41s)
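--extra-config takes component.key=value pairs and forwards them to the matching kubeadm component, and the flag repeats for each setting. The restart above in isolation:

	# enable an extra admission plugin on the running cluster's apiserver
	out/minikube-linux-amd64 start -p functional-406825 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	    --wait=all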

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-406825 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.24s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 logs: (1.243956755s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

TestFunctional/serial/LogsFileCmd (1.3s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 logs --file /tmp/TestFunctionalserialLogsFileCmd876720665/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 logs --file /tmp/TestFunctionalserialLogsFileCmd876720665/001/logs.txt: (1.299523235s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/serial/InvalidService (3.94s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-406825 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-406825
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-406825: exit status 115 (270.425185ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.243:31214 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-406825 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 config get cpus: exit status 14 (56.72536ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 config get cpus: exit status 14 (42.224439ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
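The two exit-status-14 branches above are the expected contract: config get fails with a distinct exit code when the key is unset. Condensed:

	out/minikube-linux-amd64 -p functional-406825 config set cpus 2
	out/minikube-linux-amd64 -p functional-406825 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-406825 config unset cpus
	out/minikube-linux-amd64 -p functional-406825 config get cpus     # exit 14: key not found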

TestFunctional/parallel/DashboardCmd (12.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-406825 --alsologtostderr -v=1]
E0731 19:38:03.667640  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-406825 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 631826: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.89s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-406825 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-406825 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (186.076853ms)

-- stdout --
	* [functional-406825] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0731 19:38:02.854289  631573 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:38:02.854459  631573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:38:02.854466  631573 out.go:304] Setting ErrFile to fd 2...
	I0731 19:38:02.854471  631573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:38:02.854789  631573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 19:38:02.855527  631573 out.go:298] Setting JSON to false
	I0731 19:38:02.856758  631573 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12027,"bootTime":1722442656,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:38:02.856839  631573 start.go:139] virtualization: kvm guest
	I0731 19:38:02.858811  631573 out.go:177] * [functional-406825] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:38:02.860519  631573 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:38:02.860521  631573 notify.go:220] Checking for updates...
	I0731 19:38:02.863163  631573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:38:02.864425  631573 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	I0731 19:38:02.865654  631573 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	I0731 19:38:02.883359  631573 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:38:02.884795  631573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:38:02.886544  631573 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 19:38:02.887162  631573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:38:02.887229  631573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:38:02.905495  631573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
	I0731 19:38:02.905966  631573 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:38:02.906661  631573 main.go:141] libmachine: Using API Version  1
	I0731 19:38:02.906697  631573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:38:02.907084  631573 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:38:02.907298  631573 main.go:141] libmachine: (functional-406825) Calling .DriverName
	I0731 19:38:02.907592  631573 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:38:02.907902  631573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:38:02.907941  631573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:38:02.924592  631573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46601
	I0731 19:38:02.925087  631573 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:38:02.925656  631573 main.go:141] libmachine: Using API Version  1
	I0731 19:38:02.925697  631573 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:38:02.926267  631573 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:38:02.926455  631573 main.go:141] libmachine: (functional-406825) Calling .DriverName
	I0731 19:38:02.978338  631573 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:38:02.979696  631573 start.go:297] selected driver: kvm2
	I0731 19:38:02.979722  631573 start.go:901] validating driver "kvm2" against &{Name:functional-406825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-406825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:38:02.979855  631573 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:38:02.982058  631573 out.go:177] 
	W0731 19:38:02.983214  631573 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 19:38:02.984320  631573 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-406825 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.34s)
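
Exit status 23 here is the expected result: with --dry-run, the requested settings are validated against the existing profile, and the 250MB request is refused as below the 1800MB usable minimum before any VM work starts, which is why the whole check finishes in under 200ms. A minimal Go sketch of the same check, reusing the flags from this run:

	// dryrun_memory.go - sketch of the dry-run validation above: a 250MB
	// request should be refused with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-406825",
			"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=containerd")
		err := cmd.Run()
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 23 {
			fmt.Println("rejected as expected: RSRC_INSUFFICIENT_REQ_MEMORY")
			return
		}
		fmt.Println("unexpected result:", err)
	}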

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-406825 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-406825 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (182.250108ms)

-- stdout --
	* [functional-406825] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 19:38:02.682693  631501 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:38:02.682966  631501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:38:02.682977  631501 out.go:304] Setting ErrFile to fd 2...
	I0731 19:38:02.682996  631501 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:38:02.683388  631501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 19:38:02.684080  631501 out.go:298] Setting JSON to false
	I0731 19:38:02.686006  631501 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":12027,"bootTime":1722442656,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:38:02.686076  631501 start.go:139] virtualization: kvm guest
	I0731 19:38:02.688294  631501 out.go:177] * [functional-406825] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0731 19:38:02.690140  631501 notify.go:220] Checking for updates...
	I0731 19:38:02.690149  631501 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 19:38:02.691381  631501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:38:02.692649  631501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	I0731 19:38:02.693888  631501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	I0731 19:38:02.695003  631501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:38:02.700149  631501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:38:02.701897  631501 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 19:38:02.702350  631501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:38:02.702422  631501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:38:02.722840  631501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0731 19:38:02.723553  631501 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:38:02.724801  631501 main.go:141] libmachine: Using API Version  1
	I0731 19:38:02.724835  631501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:38:02.725536  631501 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:38:02.725851  631501 main.go:141] libmachine: (functional-406825) Calling .DriverName
	I0731 19:38:02.726247  631501 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:38:02.726711  631501 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:38:02.726768  631501 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:38:02.750993  631501 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32977
	I0731 19:38:02.751552  631501 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:38:02.752059  631501 main.go:141] libmachine: Using API Version  1
	I0731 19:38:02.752087  631501 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:38:02.752487  631501 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:38:02.752668  631501 main.go:141] libmachine: (functional-406825) Calling .DriverName
	I0731 19:38:02.790904  631501 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0731 19:38:02.792152  631501 start.go:297] selected driver: kvm2
	I0731 19:38:02.792173  631501 start.go:901] validating driver "kvm2" against &{Name:functional-406825 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19355/minikube-v1.33.1-1722420371-19355-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-406825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:38:02.792331  631501 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:38:02.794772  631501 out.go:177] 
	W0731 19:38:02.796427  631501 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 19:38:02.797819  631501 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (0.98s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)
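
The second invocation above renders status through a Go template; the field names are .Host, .Kubelet, .APIServer and .Kubeconfig (the "kublet:" in the command line is only the test's own output label, not a field name). A minimal Go sketch of the three output forms exercised here:

	// status_formats.go - sketch of the three status invocations above:
	// default, Go-template (-f), and JSON (-o json) output.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, args := range [][]string{
			{"-p", "functional-406825", "status"},
			{"-p", "functional-406825", "status", "-f", "host:{{.Host}},kubelet:{{.Kubelet}}"},
			{"-p", "functional-406825", "status", "-o", "json"},
		} {
			out, _ := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
			fmt.Printf("%v =>\n%s\n", args, out)
		}
	}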

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (48.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [973a1136-6225-4187-9281-07f81c5f86bc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004606707s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-406825 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-406825 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-406825 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-406825 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-406825 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-406825 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-406825 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-406825 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [31e0f872-187d-4688-99d3-c49c6f66beb0] Pending
helpers_test.go:344: "sp-pod" [31e0f872-187d-4688-99d3-c49c6f66beb0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [31e0f872-187d-4688-99d3-c49c6f66beb0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004009879s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-406825 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-406825 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-406825 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [386f82b8-a205-43d6-977b-cb97c7171a1a] Pending
helpers_test.go:344: "sp-pod" [386f82b8-a205-43d6-977b-cb97c7171a1a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [386f82b8-a205-43d6-977b-cb97c7171a1a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004559905s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-406825 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.96s)
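
The sequence above is a persistence check: claim a volume, write /tmp/mount/foo from one pod, delete that pod, start a fresh pod against the same claim, and list the mount to confirm the file survived. A condensed Go sketch of that cycle (the pod-readiness waits are omitted; the manifests are the test's own testdata files):

	// pvc_persistence.go - sketch of the write/delete/recreate/verify cycle above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) error {
		out, err := exec.Command("kubectl",
			append([]string{"--context", "functional-406825"}, args...)...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		steps := [][]string{
			{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // foo must survive the pod restart
		}
		for _, s := range steps {
			if err := kubectl(s...); err != nil {
				fmt.Println("step failed (readiness waits omitted in this sketch):", err)
				return
			}
		}
		fmt.Println("file survived pod recreation")
	}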

TestFunctional/parallel/SSHCmd (0.38s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

TestFunctional/parallel/CpCmd (1.31s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh -n functional-406825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 cp functional-406825:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd700413493/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh -n functional-406825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh -n functional-406825 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

TestFunctional/parallel/MySQL (25.71s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-406825 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-w77r4" [33fc257c-36ce-4d7d-a555-802a3b48cba3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-w77r4" [33fc257c-36ce-4d7d-a555-802a3b48cba3] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.00406249s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-406825 exec mysql-64454c8b5c-w77r4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-406825 exec mysql-64454c8b5c-w77r4 -- mysql -ppassword -e "show databases;": exit status 1 (138.878597ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-406825 exec mysql-64454c8b5c-w77r4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-406825 exec mysql-64454c8b5c-w77r4 -- mysql -ppassword -e "show databases;": exit status 1 (143.276633ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-406825 exec mysql-64454c8b5c-w77r4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-406825 exec mysql-64454c8b5c-w77r4 -- mysql -ppassword -e "show databases;": exit status 1 (119.256437ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-406825 exec mysql-64454c8b5c-w77r4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.71s)
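
The three non-zero exits above are expected startup noise: a fresh MySQL container answers before initialization completes, so early queries fail with ERROR 1045 or ERROR 2002, and the test simply retries until "show databases;" succeeds. A minimal Go sketch of that retry loop, using the pod name from this run:

	// mysql_retry.go - sketch of the retry-until-ready loop above; ERROR 1045
	// and ERROR 2002 during MySQL startup are treated as transient.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "functional-406825",
				"exec", "mysql-64454c8b5c-w77r4", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("mysql is ready:\n%s", out)
				return
			}
			time.Sleep(2 * time.Second) // transient startup error: just retry
		}
		fmt.Println("mysql never became ready")
	}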

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/624149/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo cat /etc/test/nested/copy/624149/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/624149.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo cat /etc/ssl/certs/624149.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/624149.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo cat /usr/share/ca-certificates/624149.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/6241492.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo cat /etc/ssl/certs/6241492.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/6241492.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo cat /usr/share/ca-certificates/6241492.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.17s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-406825 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 ssh "sudo systemctl is-active docker": exit status 1 (206.221193ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 ssh "sudo systemctl is-active crio": exit status 1 (194.150351ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)
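
With containerd selected as the runtime, both docker and crio must be inactive inside the VM; `systemctl is-active` exits non-zero for an inactive unit (surfaced by ssh as "Process exited with status 3" above), so the non-zero exits are the passing outcome here. A minimal Go sketch of the same check:

	// runtime_inactive.go - sketch of the check above: docker and crio must
	// report "inactive" when containerd is the selected container runtime.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "crio"} {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-406825",
				"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
			// A non-zero exit plus "inactive" on stdout is the desired state.
			if err != nil && strings.Contains(string(out), "inactive") {
				fmt.Println(unit, "is inactive, as required")
			} else {
				fmt.Println(unit, "unexpectedly active:", string(out))
			}
		}
	}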

TestFunctional/parallel/License (0.62s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.62s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-406825 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-406825 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-f9pcn" [96fb448f-604d-4f5d-b1b2-3718b2e771ff] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-f9pcn" [96fb448f-604d-4f5d-b1b2-3718b2e771ff] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004197735s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)
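
This is the standard NodePort smoke test: create a deployment from the echoserver image, expose port 8080 as a NodePort service, and wait for an app=hello-node pod to become ready; the later ServiceCmd subtests then resolve and print the resulting URL. A condensed Go sketch of the create/expose/wait sequence (using `kubectl wait` in place of the test's label polling):

	// deploy_nodeport.go - sketch of the create/expose/wait sequence above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) ([]byte, error) {
		return exec.Command("kubectl",
			append([]string{"--context", "functional-406825"}, args...)...).CombinedOutput()
	}

	func main() {
		kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
		kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
		// Block until the deployment is available instead of polling pod labels.
		out, err := kubectl("wait", "--for=condition=available", "deployment/hello-node", "--timeout=10m")
		fmt.Println(string(out), err)
	}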

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/MountCmd/any-port (8.55s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdany-port3093540856/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722454681926885351" to /tmp/TestFunctionalparallelMountCmdany-port3093540856/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722454681926885351" to /tmp/TestFunctionalparallelMountCmdany-port3093540856/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722454681926885351" to /tmp/TestFunctionalparallelMountCmdany-port3093540856/001/test-1722454681926885351
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.984887ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 19:38 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 19:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 19:38 test-1722454681926885351
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh cat /mount-9p/test-1722454681926885351
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-406825 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b155e1cc-6736-497c-8687-7094e35b8f3c] Pending
helpers_test.go:344: "busybox-mount" [b155e1cc-6736-497c-8687-7094e35b8f3c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b155e1cc-6736-497c-8687-7094e35b8f3c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b155e1cc-6736-497c-8687-7094e35b8f3c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003465971s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-406825 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdany-port3093540856/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.55s)
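
The first findmnt probe fails because the 9p mount appears asynchronously after the mount daemon starts, so the test simply probes again until it shows up. A minimal Go sketch of that polling step:

	// mount_poll.go - sketch of the findmnt polling above: keep probing over
	// ssh until the 9p mount is visible, since the daemon mounts asynchronously.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for i := 0; i < 30; i++ {
			err := exec.Command("out/minikube-linux-amd64", "-p", "functional-406825",
				"ssh", "findmnt -T /mount-9p | grep 9p").Run()
			if err == nil {
				fmt.Println("9p mount is up")
				return
			}
			time.Sleep(time.Second) // not mounted yet; retry
		}
		fmt.Println("mount never appeared")
	}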

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "265.170198ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "56.06573ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "226.856561ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "46.577741ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/MountCmd/specific-port (1.71s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdspecific-port3260688692/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.152999ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdspecific-port3260688692/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 ssh "sudo umount -f /mount-9p": exit status 1 (204.881194ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-406825 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdspecific-port3260688692/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764648010/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764648010/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764648010/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T" /mount1: exit status 1 (233.372327ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-406825 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764648010/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764648010/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-406825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1764648010/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

TestFunctional/parallel/ServiceCmd/List (0.84s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.84s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 service list -o json
functional_test.go:1490: Took "875.343302ms" to run "out/minikube-linux-amd64 -p functional-406825 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.88s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.243:32494
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

TestFunctional/parallel/ServiceCmd/URL (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.243:32494
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-406825 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-406825
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-406825
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-406825 image ls --format short --alsologtostderr:
I0731 19:38:26.237261  633468 out.go:291] Setting OutFile to fd 1 ...
I0731 19:38:26.237412  633468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:26.237425  633468 out.go:304] Setting ErrFile to fd 2...
I0731 19:38:26.237431  633468 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:26.237665  633468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
I0731 19:38:26.238257  633468 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:26.238375  633468 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:26.238797  633468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:26.238850  633468 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:26.254714  633468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36181
I0731 19:38:26.255395  633468 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:26.255979  633468 main.go:141] libmachine: Using API Version  1
I0731 19:38:26.256001  633468 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:26.256360  633468 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:26.256591  633468 main.go:141] libmachine: (functional-406825) Calling .GetState
I0731 19:38:26.258523  633468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:26.258568  633468 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:26.276652  633468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46275
I0731 19:38:26.277040  633468 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:26.277619  633468 main.go:141] libmachine: Using API Version  1
I0731 19:38:26.277645  633468 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:26.277988  633468 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:26.278237  633468 main.go:141] libmachine: (functional-406825) Calling .DriverName
I0731 19:38:26.278448  633468 ssh_runner.go:195] Run: systemctl --version
I0731 19:38:26.278481  633468 main.go:141] libmachine: (functional-406825) Calling .GetSSHHostname
I0731 19:38:26.281390  633468 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:26.281820  633468 main.go:141] libmachine: (functional-406825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:18:e3", ip: ""} in network mk-functional-406825: {Iface:virbr1 ExpiryTime:2024-07-31 20:34:59 +0000 UTC Type:0 Mac:52:54:00:70:18:e3 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-406825 Clientid:01:52:54:00:70:18:e3}
I0731 19:38:26.281848  633468 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined IP address 192.168.39.243 and MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:26.282037  633468 main.go:141] libmachine: (functional-406825) Calling .GetSSHPort
I0731 19:38:26.282217  633468 main.go:141] libmachine: (functional-406825) Calling .GetSSHKeyPath
I0731 19:38:26.282361  633468 main.go:141] libmachine: (functional-406825) Calling .GetSSHUsername
I0731 19:38:26.282518  633468 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/functional-406825/id_rsa Username:docker}
I0731 19:38:26.361410  633468 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 19:38:26.422256  633468 main.go:141] libmachine: Making call to close driver server
I0731 19:38:26.422271  633468 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:26.422636  633468 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:26.422707  633468 main.go:141] libmachine: (functional-406825) DBG | Closing plugin on server side
I0731 19:38:26.422720  633468 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:38:26.422731  633468 main.go:141] libmachine: Making call to close driver server
I0731 19:38:26.422740  633468 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:26.423023  633468 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:26.423052  633468 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:38:26.423064  633468 main.go:141] libmachine: (functional-406825) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
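Aside: as the stderr trace above shows, `image ls --format short` boils down to running `sudo crictl images --output json` on the node and printing the repo tags. A minimal host-side sketch of that step (an illustration, not minikube's actual code; the JSON field names are assumed from crictl's CRI-style output):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Shape of `crictl images --output json` (assumed from the CRI ListImages API).
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Mirrors the ssh_runner step in the trace above.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}
```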

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-406825 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3            | sha256:76932a | 31.1MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20240715-585640e9 | sha256:5cc3ab | 36.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.30.3            | sha256:3edc18 | 19.3MB |
| docker.io/kicbase/echo-server               | functional-406825  | sha256:9056ab | 2.37MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| localhost/my-image                          | functional-406825  | sha256:ca33cb | 775kB  |
| registry.k8s.io/kube-apiserver              | v1.30.3            | sha256:1f6d57 | 32.8MB |
| registry.k8s.io/kube-proxy                  | v1.30.3            | sha256:55bb02 | 29MB   |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| docker.io/library/minikube-local-cache-test | functional-406825  | sha256:74d917 | 992B   |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-406825 image ls --format table --alsologtostderr:
I0731 19:38:31.629541  633724 out.go:291] Setting OutFile to fd 1 ...
I0731 19:38:31.629649  633724 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:31.629657  633724 out.go:304] Setting ErrFile to fd 2...
I0731 19:38:31.629661  633724 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:31.629841  633724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
I0731 19:38:31.630397  633724 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:31.630502  633724 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:31.630857  633724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:31.630903  633724 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:31.646022  633724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33289
I0731 19:38:31.646520  633724 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:31.647081  633724 main.go:141] libmachine: Using API Version  1
I0731 19:38:31.647105  633724 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:31.647486  633724 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:31.647702  633724 main.go:141] libmachine: (functional-406825) Calling .GetState
I0731 19:38:31.649741  633724 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:31.649789  633724 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:31.664807  633724 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
I0731 19:38:31.665255  633724 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:31.665851  633724 main.go:141] libmachine: Using API Version  1
I0731 19:38:31.665884  633724 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:31.666231  633724 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:31.666441  633724 main.go:141] libmachine: (functional-406825) Calling .DriverName
I0731 19:38:31.666685  633724 ssh_runner.go:195] Run: systemctl --version
I0731 19:38:31.666733  633724 main.go:141] libmachine: (functional-406825) Calling .GetSSHHostname
I0731 19:38:31.669763  633724 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:31.670377  633724 main.go:141] libmachine: (functional-406825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:18:e3", ip: ""} in network mk-functional-406825: {Iface:virbr1 ExpiryTime:2024-07-31 20:34:59 +0000 UTC Type:0 Mac:52:54:00:70:18:e3 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-406825 Clientid:01:52:54:00:70:18:e3}
I0731 19:38:31.670420  633724 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined IP address 192.168.39.243 and MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:31.670582  633724 main.go:141] libmachine: (functional-406825) Calling .GetSSHPort
I0731 19:38:31.670754  633724 main.go:141] libmachine: (functional-406825) Calling .GetSSHKeyPath
I0731 19:38:31.670892  633724 main.go:141] libmachine: (functional-406825) Calling .GetSSHUsername
I0731 19:38:31.671076  633724 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/functional-406825/id_rsa Username:docker}
I0731 19:38:31.784781  633724 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 19:38:31.885962  633724 main.go:141] libmachine: Making call to close driver server
I0731 19:38:31.885989  633724 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:31.886323  633724 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:31.886351  633724 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:38:31.886432  633724 main.go:141] libmachine: (functional-406825) DBG | Closing plugin on server side
I0731 19:38:31.886459  633724 main.go:141] libmachine: Making call to close driver server
I0731 19:38:31.886475  633724 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:31.886694  633724 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:31.886712  633724 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)
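Aside: a table layout like the one above can be reproduced with nothing more than the standard library's text/tabwriter. A sketch with two hard-coded rows taken from the output, not minikube's actual renderer:

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

func main() {
	// Columns separated by tabs in the source, padded to align on output.
	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
	fmt.Fprintln(w, "Image\tTag\tImage ID\tSize")
	fmt.Fprintln(w, "registry.k8s.io/etcd\t3.5.12-0\tsha256:3861cf\t57.2MB")
	fmt.Fprintln(w, "registry.k8s.io/pause\t3.9\tsha256:e6f181\t322kB")
	w.Flush() // tabwriter buffers until flushed
}
```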

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-406825 image ls --format json --alsologtostderr:
[{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"31139481"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb
4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:ca33cbd93a7d78edf7bbc4ba7f5ceaab13402bd5e08d57b6fd628cf608e9d127","repoDigests":[],"repoTags":["localhost/my-image:functional-406825"],"size":"774887"},{"id":"sha256:74d917eab1d3c367645c1b72ab3062e5cfe7ac45980a657f15be2a8e07248fad","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-406825"],"size":"992"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["do
cker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"32770038"},{"id":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c
73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"19329508"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-406825"],"size":"2372971"},{"id":"sha256:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"36775157"},{"id":"sha256:115053965e86b2df4d
78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"29035454"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-406825 image ls --format json --alsologtostderr:
I0731 19:38:31.391706  633678 out.go:291] Setting OutFile to fd 1 ...
I0731 19:38:31.392111  633678 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:31.392130  633678 out.go:304] Setting ErrFile to fd 2...
I0731 19:38:31.392137  633678 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:31.392606  633678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
I0731 19:38:31.393715  633678 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:31.393911  633678 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:31.394287  633678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:31.394337  633678 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:31.410235  633678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33897
I0731 19:38:31.410736  633678 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:31.411351  633678 main.go:141] libmachine: Using API Version  1
I0731 19:38:31.411384  633678 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:31.411811  633678 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:31.412039  633678 main.go:141] libmachine: (functional-406825) Calling .GetState
I0731 19:38:31.414003  633678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:31.414057  633678 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:31.430400  633678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
I0731 19:38:31.430864  633678 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:31.431506  633678 main.go:141] libmachine: Using API Version  1
I0731 19:38:31.431529  633678 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:31.433194  633678 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:31.433455  633678 main.go:141] libmachine: (functional-406825) Calling .DriverName
I0731 19:38:31.433733  633678 ssh_runner.go:195] Run: systemctl --version
I0731 19:38:31.433771  633678 main.go:141] libmachine: (functional-406825) Calling .GetSSHHostname
I0731 19:38:31.436829  633678 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:31.437256  633678 main.go:141] libmachine: (functional-406825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:18:e3", ip: ""} in network mk-functional-406825: {Iface:virbr1 ExpiryTime:2024-07-31 20:34:59 +0000 UTC Type:0 Mac:52:54:00:70:18:e3 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-406825 Clientid:01:52:54:00:70:18:e3}
I0731 19:38:31.437284  633678 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined IP address 192.168.39.243 and MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:31.437448  633678 main.go:141] libmachine: (functional-406825) Calling .GetSSHPort
I0731 19:38:31.437626  633678 main.go:141] libmachine: (functional-406825) Calling .GetSSHKeyPath
I0731 19:38:31.437814  633678 main.go:141] libmachine: (functional-406825) Calling .GetSSHUsername
I0731 19:38:31.438033  633678 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/functional-406825/id_rsa Username:docker}
I0731 19:38:31.523908  633678 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 19:38:31.579292  633678 main.go:141] libmachine: Making call to close driver server
I0731 19:38:31.579310  633678 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:31.579626  633678 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:31.579647  633678 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:38:31.579655  633678 main.go:141] libmachine: Making call to close driver server
I0731 19:38:31.579661  633678 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:31.579682  633678 main.go:141] libmachine: (functional-406825) DBG | Closing plugin on server side
I0731 19:38:31.579868  633678 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:31.579884  633678 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
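Aside: unlike crictl's {"images": ...} wrapper, the stdout above is a flat JSON array, so consuming it only needs a slice type. A minimal sketch, assuming the four fields visible in the output:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Fields as they appear in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	var imgs []image
	if err := json.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs {
		fmt.Println(img.ID, img.Size)
	}
}
```

Usage (the filename is hypothetical): `out/minikube-linux-amd64 -p functional-406825 image ls --format json | go run decode.go`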

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-406825 image ls --format yaml --alsologtostderr:
- id: sha256:74d917eab1d3c367645c1b72ab3062e5cfe7ac45980a657f15be2a8e07248fad
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-406825
size: "992"
- id: sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "31139481"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "36775157"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "19329508"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-406825
size: "2372971"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "32770038"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "29035454"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-406825 image ls --format yaml --alsologtostderr:
I0731 19:38:26.475041  633502 out.go:291] Setting OutFile to fd 1 ...
I0731 19:38:26.475356  633502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:26.475367  633502 out.go:304] Setting ErrFile to fd 2...
I0731 19:38:26.475372  633502 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:26.475637  633502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
I0731 19:38:26.476260  633502 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:26.476381  633502 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:26.476789  633502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:26.476846  633502 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:26.494546  633502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
I0731 19:38:26.495060  633502 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:26.495672  633502 main.go:141] libmachine: Using API Version  1
I0731 19:38:26.495698  633502 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:26.496100  633502 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:26.496327  633502 main.go:141] libmachine: (functional-406825) Calling .GetState
I0731 19:38:26.498176  633502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:26.498219  633502 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:26.513475  633502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
I0731 19:38:26.513995  633502 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:26.514475  633502 main.go:141] libmachine: Using API Version  1
I0731 19:38:26.514498  633502 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:26.514879  633502 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:26.515053  633502 main.go:141] libmachine: (functional-406825) Calling .DriverName
I0731 19:38:26.515292  633502 ssh_runner.go:195] Run: systemctl --version
I0731 19:38:26.515322  633502 main.go:141] libmachine: (functional-406825) Calling .GetSSHHostname
I0731 19:38:26.518143  633502 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:26.518606  633502 main.go:141] libmachine: (functional-406825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:18:e3", ip: ""} in network mk-functional-406825: {Iface:virbr1 ExpiryTime:2024-07-31 20:34:59 +0000 UTC Type:0 Mac:52:54:00:70:18:e3 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-406825 Clientid:01:52:54:00:70:18:e3}
I0731 19:38:26.518628  633502 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined IP address 192.168.39.243 and MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:26.518785  633502 main.go:141] libmachine: (functional-406825) Calling .GetSSHPort
I0731 19:38:26.518978  633502 main.go:141] libmachine: (functional-406825) Calling .GetSSHKeyPath
I0731 19:38:26.519178  633502 main.go:141] libmachine: (functional-406825) Calling .GetSSHUsername
I0731 19:38:26.519342  633502 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/functional-406825/id_rsa Username:docker}
I0731 19:38:26.604366  633502 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 19:38:26.655705  633502 main.go:141] libmachine: Making call to close driver server
I0731 19:38:26.655724  633502 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:26.656049  633502 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:26.656068  633502 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:38:26.656077  633502 main.go:141] libmachine: Making call to close driver server
I0731 19:38:26.656085  633502 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:26.656513  633502 main.go:141] libmachine: (functional-406825) DBG | Closing plugin on server side
I0731 19:38:26.656512  633502 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:26.656545  633502 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
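Aside: the YAML variant carries the same records. A decoding sketch using gopkg.in/yaml.v3 (an assumed third-party dependency, `go get gopkg.in/yaml.v3`; explicit tags because the keys above are camelCase):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Keys as they appear in the `image ls --format yaml` output above.
type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	var imgs []image
	if err := yaml.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d images listed\n", len(imgs))
}
```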

TestFunctional/parallel/ImageCommands/ImageBuild (4.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-406825 ssh pgrep buildkitd: exit status 1 (196.770283ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image build -t localhost/my-image:functional-406825 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 image build -t localhost/my-image:functional-406825 testdata/build --alsologtostderr: (4.184572327s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-406825 image build -t localhost/my-image:functional-406825 testdata/build --alsologtostderr:
I0731 19:38:26.973778  633556 out.go:291] Setting OutFile to fd 1 ...
I0731 19:38:26.974111  633556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:26.974123  633556 out.go:304] Setting ErrFile to fd 2...
I0731 19:38:26.974130  633556 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 19:38:26.974420  633556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
I0731 19:38:26.975183  633556 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:26.975819  633556 config.go:182] Loaded profile config "functional-406825": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0731 19:38:26.976175  633556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:26.976238  633556 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:26.992240  633556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38711
I0731 19:38:26.992806  633556 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:26.993458  633556 main.go:141] libmachine: Using API Version  1
I0731 19:38:26.993487  633556 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:26.993902  633556 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:26.994134  633556 main.go:141] libmachine: (functional-406825) Calling .GetState
I0731 19:38:26.996192  633556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0731 19:38:26.996249  633556 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 19:38:27.011465  633556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
I0731 19:38:27.011971  633556 main.go:141] libmachine: () Calling .GetVersion
I0731 19:38:27.012560  633556 main.go:141] libmachine: Using API Version  1
I0731 19:38:27.012587  633556 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 19:38:27.012936  633556 main.go:141] libmachine: () Calling .GetMachineName
I0731 19:38:27.013147  633556 main.go:141] libmachine: (functional-406825) Calling .DriverName
I0731 19:38:27.013406  633556 ssh_runner.go:195] Run: systemctl --version
I0731 19:38:27.013435  633556 main.go:141] libmachine: (functional-406825) Calling .GetSSHHostname
I0731 19:38:27.016102  633556 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:27.016632  633556 main.go:141] libmachine: (functional-406825) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:18:e3", ip: ""} in network mk-functional-406825: {Iface:virbr1 ExpiryTime:2024-07-31 20:34:59 +0000 UTC Type:0 Mac:52:54:00:70:18:e3 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:functional-406825 Clientid:01:52:54:00:70:18:e3}
I0731 19:38:27.016668  633556 main.go:141] libmachine: (functional-406825) DBG | domain functional-406825 has defined IP address 192.168.39.243 and MAC address 52:54:00:70:18:e3 in network mk-functional-406825
I0731 19:38:27.016914  633556 main.go:141] libmachine: (functional-406825) Calling .GetSSHPort
I0731 19:38:27.017092  633556 main.go:141] libmachine: (functional-406825) Calling .GetSSHKeyPath
I0731 19:38:27.017266  633556 main.go:141] libmachine: (functional-406825) Calling .GetSSHUsername
I0731 19:38:27.017437  633556 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/functional-406825/id_rsa Username:docker}
I0731 19:38:27.093922  633556 build_images.go:161] Building image from path: /tmp/build.1431572520.tar
I0731 19:38:27.094031  633556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 19:38:27.104227  633556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1431572520.tar
I0731 19:38:27.108672  633556 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1431572520.tar: stat -c "%s %y" /var/lib/minikube/build/build.1431572520.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1431572520.tar': No such file or directory
I0731 19:38:27.108709  633556 ssh_runner.go:362] scp /tmp/build.1431572520.tar --> /var/lib/minikube/build/build.1431572520.tar (3072 bytes)
I0731 19:38:27.136706  633556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1431572520
I0731 19:38:27.146548  633556 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1431572520 -xf /var/lib/minikube/build/build.1431572520.tar
I0731 19:38:27.159614  633556 containerd.go:394] Building image: /var/lib/minikube/build/build.1431572520
I0731 19:38:27.159717  633556 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1431572520 --local dockerfile=/var/lib/minikube/build/build.1431572520 --output type=image,name=localhost/my-image:functional-406825
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:280bb18dad05215a89df120c7965b83a6f5346b7757a91264c2646411ea38a27 0.0s done
#8 exporting config sha256:ca33cbd93a7d78edf7bbc4ba7f5ceaab13402bd5e08d57b6fd628cf608e9d127 0.0s done
#8 naming to localhost/my-image:functional-406825 done
#8 DONE 0.2s
I0731 19:38:31.062370  633556 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1431572520 --local dockerfile=/var/lib/minikube/build/build.1431572520 --output type=image,name=localhost/my-image:functional-406825: (3.902591907s)
I0731 19:38:31.062468  633556 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1431572520
I0731 19:38:31.077404  633556 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1431572520.tar
I0731 19:38:31.103893  633556 build_images.go:217] Built localhost/my-image:functional-406825 from /tmp/build.1431572520.tar
I0731 19:38:31.103933  633556 build_images.go:133] succeeded building to: functional-406825
I0731 19:38:31.103940  633556 build_images.go:134] failed building to: 
I0731 19:38:31.103973  633556 main.go:141] libmachine: Making call to close driver server
I0731 19:38:31.103989  633556 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:31.104348  633556 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:31.104368  633556 main.go:141] libmachine: (functional-406825) DBG | Closing plugin on server side
I0731 19:38:31.104372  633556 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 19:38:31.104388  633556 main.go:141] libmachine: Making call to close driver server
I0731 19:38:31.104397  633556 main.go:141] libmachine: (functional-406825) Calling .Close
I0731 19:38:31.104660  633556 main.go:141] libmachine: (functional-406825) DBG | Closing plugin on server side
I0731 19:38:31.104707  633556 main.go:141] libmachine: Successfully made call to close driver server
I0731 19:38:31.104741  633556 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.62s)
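Aside: the build above is minikube copying the context tar to the node and driving BuildKit with buildctl. A host-side sketch of the same invocation, with the flags and paths copied from the trace (it assumes buildctl and a running buildkitd, and that the context has already been unpacked to that directory):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Unpacked build context directory, as in the trace above.
	dir := "/var/lib/minikube/build/build.1431572520"
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name=localhost/my-image:functional-406825")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // stream the #1..#8 build steps
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```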

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
2024/07/31 19:38:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.734439534s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-406825
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image load --daemon docker.io/kicbase/echo-server:functional-406825 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 image load --daemon docker.io/kicbase/echo-server:functional-406825 --alsologtostderr: (1.593595148s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image load --daemon docker.io/kicbase/echo-server:functional-406825 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.08s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-406825
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image load --daemon docker.io/kicbase/echo-server:functional-406825 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 image load --daemon docker.io/kicbase/echo-server:functional-406825 --alsologtostderr: (1.155255546s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image save docker.io/kicbase/echo-server:functional-406825 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image rm docker.io/kicbase/echo-server:functional-406825 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-406825 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr: (1.009148501s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-406825
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-406825 image save --daemon docker.io/kicbase/echo-server:functional-406825 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-406825
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)
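Aside: ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon above together amount to a tar round trip through `image save` / `image load`. A sketch driving those two subcommands from Go (binary path and profile as in the logs; the /tmp path is hypothetical):

```go
package main

import (
	"log"
	"os/exec"
)

// run invokes the minikube binary under test with the given arguments.
func run(args ...string) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}

func main() {
	// Export the image to a tarball, then load it back into the cluster.
	run("-p", "functional-406825", "image", "save",
		"docker.io/kicbase/echo-server:functional-406825", "/tmp/echo-server-save.tar")
	run("-p", "functional-406825", "image", "load", "/tmp/echo-server-save.tar")
}
```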

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-406825
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-406825
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-406825
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (223.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-628749 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0731 19:40:19.822626  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:40:47.509204  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-628749 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m43.096537591s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (223.73s)

TestMultiControlPlane/serial/DeployApp (5.8s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-628749 -- rollout status deployment/busybox: (3.785137985s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-b6t6q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-cf4dq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-j5zcg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-b6t6q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-cf4dq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-j5zcg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-b6t6q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-cf4dq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-j5zcg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.80s)
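Aside: each DNS assertion above is just an exec of nslookup inside a busybox pod, routed through minikube's kubectl wrapper. One such check as a sketch (pod name copied from this run; it changes per deployment):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same command shape as ha_test.go:189 above.
	cmd := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "ha-628749", "--",
		"exec", "busybox-fc5497c4f-b6t6q", "--",
		"nslookup", "kubernetes.default.svc.cluster.local")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err) // non-zero exit means the in-pod lookup failed
	}
}
```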

TestMultiControlPlane/serial/PingHostFromPods (1.16s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-b6t6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-b6t6q -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-cf4dq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-cf4dq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-j5zcg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628749 -- exec busybox-fc5497c4f-j5zcg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.16s)
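The nslookup | awk 'NR==5' | cut -d' ' -f3 pipeline just extracts the resolved address of host.minikube.internal from busybox's nslookup output (192.168.39.1 here, the host side of the KVM network), and each pod then pings it once. A hand-run equivalent, assuming the same profile:

	kubectl --context ha-628749 exec deploy/busybox -- nslookup host.minikube.internal
	kubectl --context ha-628749 exec deploy/busybox -- ping -c 1 192.168.39.1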

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.44s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-628749 -v=7 --alsologtostderr
E0731 19:43:01.522663  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:01.527986  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:01.538264  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:01.558633  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:01.598985  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:01.679376  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:01.839849  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:02.160357  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:02.800918  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:04.081379  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:06.641577  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:11.761744  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:22.002280  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:43:42.483024  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-628749 -v=7 --alsologtostderr: (54.626441575s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.44s)
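node add provisions a fresh machine and joins it to the cluster as a worker; it shows up as ha-628749-m04 in the steps that follow. Sketch, assuming the ha-628749 profile is running:

	out/minikube-linux-amd64 node add -p ha-628749
	out/minikube-linux-amd64 -p ha-628749 status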

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-628749 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)
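The jsonpath query dumps every node's label map on one line so the test can assert the expected minikube labels are present. For interactive use, kubectl's --show-labels flag prints the same data per node (a convenience alternative, not what the test runs):

	kubectl --context ha-628749 get nodes --show-labels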

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.59s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp testdata/cp-test.txt ha-628749:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile448230507/001/cp-test_ha-628749.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749:/home/docker/cp-test.txt ha-628749-m02:/home/docker/cp-test_ha-628749_ha-628749-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m02 "sudo cat /home/docker/cp-test_ha-628749_ha-628749-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749:/home/docker/cp-test.txt ha-628749-m03:/home/docker/cp-test_ha-628749_ha-628749-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m03 "sudo cat /home/docker/cp-test_ha-628749_ha-628749-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749:/home/docker/cp-test.txt ha-628749-m04:/home/docker/cp-test_ha-628749_ha-628749-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m04 "sudo cat /home/docker/cp-test_ha-628749_ha-628749-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp testdata/cp-test.txt ha-628749-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile448230507/001/cp-test_ha-628749-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m02:/home/docker/cp-test.txt ha-628749:/home/docker/cp-test_ha-628749-m02_ha-628749.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749 "sudo cat /home/docker/cp-test_ha-628749-m02_ha-628749.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m02:/home/docker/cp-test.txt ha-628749-m03:/home/docker/cp-test_ha-628749-m02_ha-628749-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m03 "sudo cat /home/docker/cp-test_ha-628749-m02_ha-628749-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m02:/home/docker/cp-test.txt ha-628749-m04:/home/docker/cp-test_ha-628749-m02_ha-628749-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m04 "sudo cat /home/docker/cp-test_ha-628749-m02_ha-628749-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp testdata/cp-test.txt ha-628749-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile448230507/001/cp-test_ha-628749-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m03:/home/docker/cp-test.txt ha-628749:/home/docker/cp-test_ha-628749-m03_ha-628749.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749 "sudo cat /home/docker/cp-test_ha-628749-m03_ha-628749.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m03:/home/docker/cp-test.txt ha-628749-m02:/home/docker/cp-test_ha-628749-m03_ha-628749-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m02 "sudo cat /home/docker/cp-test_ha-628749-m03_ha-628749-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m03:/home/docker/cp-test.txt ha-628749-m04:/home/docker/cp-test_ha-628749-m03_ha-628749-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m04 "sudo cat /home/docker/cp-test_ha-628749-m03_ha-628749-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp testdata/cp-test.txt ha-628749-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile448230507/001/cp-test_ha-628749-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m04:/home/docker/cp-test.txt ha-628749:/home/docker/cp-test_ha-628749-m04_ha-628749.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749 "sudo cat /home/docker/cp-test_ha-628749-m04_ha-628749.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m04:/home/docker/cp-test.txt ha-628749-m02:/home/docker/cp-test_ha-628749-m04_ha-628749-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m02 "sudo cat /home/docker/cp-test_ha-628749-m04_ha-628749-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 cp ha-628749-m04:/home/docker/cp-test.txt ha-628749-m03:/home/docker/cp-test_ha-628749-m04_ha-628749-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m03 "sudo cat /home/docker/cp-test_ha-628749-m04_ha-628749-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.59s)
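minikube cp accepts an optional <node>: prefix on either argument, which is what this matrix exercises: host-to-node, node-to-host, and node-to-node copies across all four machines, each verified by ssh'ing into the target and cat'ing the file. One round trip as a sketch, assuming the same profile:

	out/minikube-linux-amd64 -p ha-628749 cp testdata/cp-test.txt ha-628749-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-628749 ssh -n ha-628749-m02 "sudo cat /home/docker/cp-test.txt"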

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (92.06s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 node stop m02 -v=7 --alsologtostderr
E0731 19:44:23.443486  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:45:19.822133  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-628749 node stop m02 -v=7 --alsologtostderr: (1m31.435329661s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr: exit status 7 (620.899253ms)

-- stdout --
	ha-628749
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-628749-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-628749-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-628749-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0731 19:45:35.124169  638572 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:45:35.124430  638572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:35.124439  638572 out.go:304] Setting ErrFile to fd 2...
	I0731 19:45:35.124443  638572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:45:35.124610  638572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 19:45:35.124798  638572 out.go:298] Setting JSON to false
	I0731 19:45:35.124833  638572 mustload.go:65] Loading cluster: ha-628749
	I0731 19:45:35.124859  638572 notify.go:220] Checking for updates...
	I0731 19:45:35.125273  638572 config.go:182] Loaded profile config "ha-628749": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 19:45:35.125296  638572 status.go:255] checking status of ha-628749 ...
	I0731 19:45:35.125829  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.125880  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.149916  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I0731 19:45:35.150451  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.151200  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.151230  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.151619  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.151809  638572 main.go:141] libmachine: (ha-628749) Calling .GetState
	I0731 19:45:35.153460  638572 status.go:330] ha-628749 host status = "Running" (err=<nil>)
	I0731 19:45:35.153479  638572 host.go:66] Checking if "ha-628749" exists ...
	I0731 19:45:35.153767  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.153817  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.169607  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40629
	I0731 19:45:35.170001  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.170504  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.170526  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.170821  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.171019  638572 main.go:141] libmachine: (ha-628749) Calling .GetIP
	I0731 19:45:35.173781  638572 main.go:141] libmachine: (ha-628749) DBG | domain ha-628749 has defined MAC address 52:54:00:16:dc:ac in network mk-ha-628749
	I0731 19:45:35.174200  638572 main.go:141] libmachine: (ha-628749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:dc:ac", ip: ""} in network mk-ha-628749: {Iface:virbr1 ExpiryTime:2024-07-31 20:39:18 +0000 UTC Type:0 Mac:52:54:00:16:dc:ac Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-628749 Clientid:01:52:54:00:16:dc:ac}
	I0731 19:45:35.174230  638572 main.go:141] libmachine: (ha-628749) DBG | domain ha-628749 has defined IP address 192.168.39.174 and MAC address 52:54:00:16:dc:ac in network mk-ha-628749
	I0731 19:45:35.174395  638572 host.go:66] Checking if "ha-628749" exists ...
	I0731 19:45:35.174691  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.174737  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.189986  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0731 19:45:35.190436  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.190892  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.190914  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.191238  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.191424  638572 main.go:141] libmachine: (ha-628749) Calling .DriverName
	I0731 19:45:35.191650  638572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:45:35.191674  638572 main.go:141] libmachine: (ha-628749) Calling .GetSSHHostname
	I0731 19:45:35.194176  638572 main.go:141] libmachine: (ha-628749) DBG | domain ha-628749 has defined MAC address 52:54:00:16:dc:ac in network mk-ha-628749
	I0731 19:45:35.194545  638572 main.go:141] libmachine: (ha-628749) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:dc:ac", ip: ""} in network mk-ha-628749: {Iface:virbr1 ExpiryTime:2024-07-31 20:39:18 +0000 UTC Type:0 Mac:52:54:00:16:dc:ac Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:ha-628749 Clientid:01:52:54:00:16:dc:ac}
	I0731 19:45:35.194582  638572 main.go:141] libmachine: (ha-628749) DBG | domain ha-628749 has defined IP address 192.168.39.174 and MAC address 52:54:00:16:dc:ac in network mk-ha-628749
	I0731 19:45:35.194782  638572 main.go:141] libmachine: (ha-628749) Calling .GetSSHPort
	I0731 19:45:35.194957  638572 main.go:141] libmachine: (ha-628749) Calling .GetSSHKeyPath
	I0731 19:45:35.195055  638572 main.go:141] libmachine: (ha-628749) Calling .GetSSHUsername
	I0731 19:45:35.195217  638572 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/ha-628749/id_rsa Username:docker}
	I0731 19:45:35.275667  638572 ssh_runner.go:195] Run: systemctl --version
	I0731 19:45:35.282267  638572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:45:35.296832  638572 kubeconfig.go:125] found "ha-628749" server: "https://192.168.39.254:8443"
	I0731 19:45:35.296865  638572 api_server.go:166] Checking apiserver status ...
	I0731 19:45:35.296917  638572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:45:35.310417  638572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1173/cgroup
	W0731 19:45:35.319039  638572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1173/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:45:35.319135  638572 ssh_runner.go:195] Run: ls
	I0731 19:45:35.324070  638572 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:45:35.329481  638572 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:45:35.329503  638572 status.go:422] ha-628749 apiserver status = Running (err=<nil>)
	I0731 19:45:35.329512  638572 status.go:257] ha-628749 status: &{Name:ha-628749 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:45:35.329529  638572 status.go:255] checking status of ha-628749-m02 ...
	I0731 19:45:35.329815  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.329849  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.345539  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45991
	I0731 19:45:35.346053  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.346622  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.346644  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.346959  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.347200  638572 main.go:141] libmachine: (ha-628749-m02) Calling .GetState
	I0731 19:45:35.348877  638572 status.go:330] ha-628749-m02 host status = "Stopped" (err=<nil>)
	I0731 19:45:35.348892  638572 status.go:343] host is not running, skipping remaining checks
	I0731 19:45:35.348900  638572 status.go:257] ha-628749-m02 status: &{Name:ha-628749-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:45:35.348920  638572 status.go:255] checking status of ha-628749-m03 ...
	I0731 19:45:35.349206  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.349248  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.364482  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33067
	I0731 19:45:35.364964  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.365469  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.365495  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.365871  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.366072  638572 main.go:141] libmachine: (ha-628749-m03) Calling .GetState
	I0731 19:45:35.367736  638572 status.go:330] ha-628749-m03 host status = "Running" (err=<nil>)
	I0731 19:45:35.367756  638572 host.go:66] Checking if "ha-628749-m03" exists ...
	I0731 19:45:35.368125  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.368162  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.383340  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38703
	I0731 19:45:35.383805  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.384318  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.384350  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.384652  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.384830  638572 main.go:141] libmachine: (ha-628749-m03) Calling .GetIP
	I0731 19:45:35.387489  638572 main.go:141] libmachine: (ha-628749-m03) DBG | domain ha-628749-m03 has defined MAC address 52:54:00:57:7a:21 in network mk-ha-628749
	I0731 19:45:35.387913  638572 main.go:141] libmachine: (ha-628749-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:7a:21", ip: ""} in network mk-ha-628749: {Iface:virbr1 ExpiryTime:2024-07-31 20:41:50 +0000 UTC Type:0 Mac:52:54:00:57:7a:21 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-628749-m03 Clientid:01:52:54:00:57:7a:21}
	I0731 19:45:35.387950  638572 main.go:141] libmachine: (ha-628749-m03) DBG | domain ha-628749-m03 has defined IP address 192.168.39.42 and MAC address 52:54:00:57:7a:21 in network mk-ha-628749
	I0731 19:45:35.388046  638572 host.go:66] Checking if "ha-628749-m03" exists ...
	I0731 19:45:35.388369  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.388431  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.403496  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33775
	I0731 19:45:35.403989  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.404473  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.404498  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.404784  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.404977  638572 main.go:141] libmachine: (ha-628749-m03) Calling .DriverName
	I0731 19:45:35.405180  638572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:45:35.405202  638572 main.go:141] libmachine: (ha-628749-m03) Calling .GetSSHHostname
	I0731 19:45:35.408040  638572 main.go:141] libmachine: (ha-628749-m03) DBG | domain ha-628749-m03 has defined MAC address 52:54:00:57:7a:21 in network mk-ha-628749
	I0731 19:45:35.408477  638572 main.go:141] libmachine: (ha-628749-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:7a:21", ip: ""} in network mk-ha-628749: {Iface:virbr1 ExpiryTime:2024-07-31 20:41:50 +0000 UTC Type:0 Mac:52:54:00:57:7a:21 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-628749-m03 Clientid:01:52:54:00:57:7a:21}
	I0731 19:45:35.408512  638572 main.go:141] libmachine: (ha-628749-m03) DBG | domain ha-628749-m03 has defined IP address 192.168.39.42 and MAC address 52:54:00:57:7a:21 in network mk-ha-628749
	I0731 19:45:35.408633  638572 main.go:141] libmachine: (ha-628749-m03) Calling .GetSSHPort
	I0731 19:45:35.408783  638572 main.go:141] libmachine: (ha-628749-m03) Calling .GetSSHKeyPath
	I0731 19:45:35.408954  638572 main.go:141] libmachine: (ha-628749-m03) Calling .GetSSHUsername
	I0731 19:45:35.409173  638572 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/ha-628749-m03/id_rsa Username:docker}
	I0731 19:45:35.487516  638572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:45:35.504179  638572 kubeconfig.go:125] found "ha-628749" server: "https://192.168.39.254:8443"
	I0731 19:45:35.504209  638572 api_server.go:166] Checking apiserver status ...
	I0731 19:45:35.504240  638572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:45:35.518996  638572 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup
	W0731 19:45:35.528937  638572 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:45:35.528986  638572 ssh_runner.go:195] Run: ls
	I0731 19:45:35.533365  638572 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 19:45:35.538107  638572 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 19:45:35.538135  638572 status.go:422] ha-628749-m03 apiserver status = Running (err=<nil>)
	I0731 19:45:35.538144  638572 status.go:257] ha-628749-m03 status: &{Name:ha-628749-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:45:35.538160  638572 status.go:255] checking status of ha-628749-m04 ...
	I0731 19:45:35.538481  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.538535  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.553643  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0731 19:45:35.554056  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.554527  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.554548  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.554873  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.555060  638572 main.go:141] libmachine: (ha-628749-m04) Calling .GetState
	I0731 19:45:35.556589  638572 status.go:330] ha-628749-m04 host status = "Running" (err=<nil>)
	I0731 19:45:35.556618  638572 host.go:66] Checking if "ha-628749-m04" exists ...
	I0731 19:45:35.556907  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.556948  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.574460  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0731 19:45:35.574903  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.575464  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.575487  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.575863  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.576058  638572 main.go:141] libmachine: (ha-628749-m04) Calling .GetIP
	I0731 19:45:35.579563  638572 main.go:141] libmachine: (ha-628749-m04) DBG | domain ha-628749-m04 has defined MAC address 52:54:00:04:83:e8 in network mk-ha-628749
	I0731 19:45:35.580114  638572 main.go:141] libmachine: (ha-628749-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:83:e8", ip: ""} in network mk-ha-628749: {Iface:virbr1 ExpiryTime:2024-07-31 20:43:09 +0000 UTC Type:0 Mac:52:54:00:04:83:e8 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-628749-m04 Clientid:01:52:54:00:04:83:e8}
	I0731 19:45:35.580141  638572 main.go:141] libmachine: (ha-628749-m04) DBG | domain ha-628749-m04 has defined IP address 192.168.39.133 and MAC address 52:54:00:04:83:e8 in network mk-ha-628749
	I0731 19:45:35.580337  638572 host.go:66] Checking if "ha-628749-m04" exists ...
	I0731 19:45:35.580677  638572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:45:35.580724  638572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:45:35.596275  638572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0731 19:45:35.596776  638572 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:45:35.597289  638572 main.go:141] libmachine: Using API Version  1
	I0731 19:45:35.597311  638572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:45:35.597665  638572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:45:35.597875  638572 main.go:141] libmachine: (ha-628749-m04) Calling .DriverName
	I0731 19:45:35.598064  638572 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:45:35.598086  638572 main.go:141] libmachine: (ha-628749-m04) Calling .GetSSHHostname
	I0731 19:45:35.600969  638572 main.go:141] libmachine: (ha-628749-m04) DBG | domain ha-628749-m04 has defined MAC address 52:54:00:04:83:e8 in network mk-ha-628749
	I0731 19:45:35.601411  638572 main.go:141] libmachine: (ha-628749-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:04:83:e8", ip: ""} in network mk-ha-628749: {Iface:virbr1 ExpiryTime:2024-07-31 20:43:09 +0000 UTC Type:0 Mac:52:54:00:04:83:e8 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-628749-m04 Clientid:01:52:54:00:04:83:e8}
	I0731 19:45:35.601449  638572 main.go:141] libmachine: (ha-628749-m04) DBG | domain ha-628749-m04 has defined IP address 192.168.39.133 and MAC address 52:54:00:04:83:e8 in network mk-ha-628749
	I0731 19:45:35.601587  638572 main.go:141] libmachine: (ha-628749-m04) Calling .GetSSHPort
	I0731 19:45:35.601762  638572 main.go:141] libmachine: (ha-628749-m04) Calling .GetSSHKeyPath
	I0731 19:45:35.601893  638572 main.go:141] libmachine: (ha-628749-m04) Calling .GetSSHUsername
	I0731 19:45:35.602053  638572 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/ha-628749-m04/id_rsa Username:docker}
	I0731 19:45:35.682620  638572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:45:35.697525  638572 status.go:257] ha-628749-m04 status: &{Name:ha-628749-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.06s)
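The Non-zero exit above is the expected outcome, not a failure: with m02 stopped, status reports that node as Stopped and exits with status 7 instead of 0, which is how the test distinguishes a degraded cluster from a healthy one. The other two control planes keep serving the API through the shared endpoint (https://192.168.39.254:8443 in the log). To reproduce:

	out/minikube-linux-amd64 -p ha-628749 node stop m02
	out/minikube-linux-amd64 -p ha-628749 status -v=7; echo "exit: $?"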

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (36.76s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 node start m02 -v=7 --alsologtostderr
E0731 19:45:45.364006  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-628749 node start m02 -v=7 --alsologtostderr: (35.900747165s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.76s)
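node start boots the stopped m02 machine and rejoins it to the cluster; the follow-up status and kubectl get nodes confirm all four nodes are back. Sketch:

	out/minikube-linux-amd64 -p ha-628749 node start m02
	kubectl --context ha-628749 get nodes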

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (414.48s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-628749 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-628749 -v=7 --alsologtostderr
E0731 19:48:01.522496  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:48:29.204504  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:50:19.821648  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-628749 -v=7 --alsologtostderr: (4m35.701722433s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-628749 --wait=true -v=7 --alsologtostderr
E0731 19:51:42.869773  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 19:53:01.522659  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-628749 --wait=true -v=7 --alsologtostderr: (2m18.68239047s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-628749
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (414.48s)
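The test records the node list, stops the whole cluster, restarts it with --wait=true, and asserts the node list is unchanged, i.e. a cold restart of an HA cluster must not lose any of the four nodes. The same sequence by hand:

	out/minikube-linux-amd64 node list -p ha-628749
	out/minikube-linux-amd64 stop -p ha-628749
	out/minikube-linux-amd64 start -p ha-628749 --wait=true
	out/minikube-linux-amd64 node list -p ha-628749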

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.59s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-628749 node delete m03 -v=7 --alsologtostderr: (6.846681248s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.59s)
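node delete removes m03 from the cluster and tears down its machine; the go-template query then asserts every remaining node still reports a Ready condition of True. Sketch:

	out/minikube-linux-amd64 -p ha-628749 node delete m03
	kubectl --context ha-628749 get nodes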

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (274.41s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 stop -v=7 --alsologtostderr
E0731 19:55:19.821306  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-628749 stop -v=7 --alsologtostderr: (4m34.294678601s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr: exit status 7 (117.437178ms)

-- stdout --
	ha-628749
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-628749-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-628749-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0731 19:57:50.153700  642295 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:57:50.153829  642295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:57:50.153839  642295 out.go:304] Setting ErrFile to fd 2...
	I0731 19:57:50.153843  642295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:57:50.154027  642295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 19:57:50.154213  642295 out.go:298] Setting JSON to false
	I0731 19:57:50.154236  642295 mustload.go:65] Loading cluster: ha-628749
	I0731 19:57:50.154362  642295 notify.go:220] Checking for updates...
	I0731 19:57:50.155282  642295 config.go:182] Loaded profile config "ha-628749": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 19:57:50.155371  642295 status.go:255] checking status of ha-628749 ...
	I0731 19:57:50.156531  642295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:57:50.156604  642295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:57:50.171505  642295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46095
	I0731 19:57:50.171945  642295 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:57:50.172472  642295 main.go:141] libmachine: Using API Version  1
	I0731 19:57:50.172493  642295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:57:50.172843  642295 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:57:50.173090  642295 main.go:141] libmachine: (ha-628749) Calling .GetState
	I0731 19:57:50.189508  642295 status.go:330] ha-628749 host status = "Stopped" (err=<nil>)
	I0731 19:57:50.189529  642295 status.go:343] host is not running, skipping remaining checks
	I0731 19:57:50.189536  642295 status.go:257] ha-628749 status: &{Name:ha-628749 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:57:50.189580  642295 status.go:255] checking status of ha-628749-m02 ...
	I0731 19:57:50.189873  642295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:57:50.189921  642295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:57:50.204860  642295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38219
	I0731 19:57:50.205301  642295 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:57:50.205763  642295 main.go:141] libmachine: Using API Version  1
	I0731 19:57:50.205783  642295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:57:50.206107  642295 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:57:50.206293  642295 main.go:141] libmachine: (ha-628749-m02) Calling .GetState
	I0731 19:57:50.207755  642295 status.go:330] ha-628749-m02 host status = "Stopped" (err=<nil>)
	I0731 19:57:50.207770  642295 status.go:343] host is not running, skipping remaining checks
	I0731 19:57:50.207778  642295 status.go:257] ha-628749-m02 status: &{Name:ha-628749-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:57:50.207801  642295 status.go:255] checking status of ha-628749-m04 ...
	I0731 19:57:50.208115  642295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 19:57:50.208153  642295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:57:50.222594  642295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37111
	I0731 19:57:50.222962  642295 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:57:50.223461  642295 main.go:141] libmachine: Using API Version  1
	I0731 19:57:50.223482  642295 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:57:50.223798  642295 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:57:50.223969  642295 main.go:141] libmachine: (ha-628749-m04) Calling .GetState
	I0731 19:57:50.225268  642295 status.go:330] ha-628749-m04 host status = "Stopped" (err=<nil>)
	I0731 19:57:50.225281  642295 status.go:343] host is not running, skipping remaining checks
	I0731 19:57:50.225287  642295 status.go:257] ha-628749-m04 status: &{Name:ha-628749-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (274.41s)
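As with the single-node stop earlier, status exiting with status 7 and reporting every host as Stopped is the asserted outcome here. Note that m03 no longer appears in the list, since it was deleted two steps back. Sketch:

	out/minikube-linux-amd64 -p ha-628749 stop
	out/minikube-linux-amd64 -p ha-628749 status; echo "exit: $?"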

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (118.22s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-628749 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0731 19:58:01.522514  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 19:59:24.564798  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-628749 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m57.440575973s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (118.22s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.4s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-628749 --control-plane -v=7 --alsologtostderr
E0731 20:00:19.823334  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-628749 --control-plane -v=7 --alsologtostderr: (1m13.606975554s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-628749 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.40s)
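Unlike the earlier worker add, --control-plane joins the new node as an additional control plane, bringing the cluster back to three control-plane members after m03 was deleted. Sketch:

	out/minikube-linux-amd64 node add -p ha-628749 --control-plane
	out/minikube-linux-amd64 -p ha-628749 status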

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

                                                
                                    
TestJSONOutput/start/Command (93.59s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-714811 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-714811 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m33.592778811s)
--- PASS: TestJSONOutput/start/Command (93.59s)
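--output=json replaces minikube's human-readable progress output with one JSON event per line in CloudEvents form (the specversion/source/type fields are visible in the TestErrorJSONOutput dump at the end of this report); the Audit and parallel subtests then check that the emitted step numbers are distinct and increasing. A sketch of consuming the stream, assuming jq is installed and using a placeholder profile name:

	out/minikube-linux-amd64 start -p json-demo --output=json --driver=kvm2 --container-runtime=containerd | jq -r '.data.message'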

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.67s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-714811 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-714811 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.42s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-714811 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-714811 --output=json --user=testUser: (6.424026855s)
--- PASS: TestJSONOutput/stop/Command (6.42s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-085073 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-085073 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.803871ms)
-- stdout --
	{"specversion":"1.0","id":"55f558d0-d2f8-4888-a232-d3c65315044a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-085073] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"25175cfb-35a3-4280-b266-502044ea9f5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19355"}}
	{"specversion":"1.0","id":"e213d86c-3758-47a0-b643-b194f02c93f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6742a98e-0b0c-4565-8c0e-c43b9778df2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig"}}
	{"specversion":"1.0","id":"5aadff8c-ed68-4ebf-9022-2c35ec8a876e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube"}}
	{"specversion":"1.0","id":"8bf1052c-c808-4e5d-ab43-abb7bb755708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4d15445b-179b-44d6-9dd2-3e77d8d6c185","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a7cc1742-b2f7-4928-8b04-d877fe647c23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-085073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-085073
--- PASS: TestErrorJSONOutput (0.19s)
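Note: every line captured in the stdout block above is a CloudEvents envelope (specversion/id/source/type/datacontenttype/data), one JSON object per line. A minimal Go sketch for consuming such a stream follows; it uses only the field names visible in this run's output, and the program itself is illustrative, not part of minikube or this test suite.

// decode_events.go - sketch: read minikube --output=json lines from stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents envelope printed in the log above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start --output=json | decode_events
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit code %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}

Against the run above, the final event would print as an error with exit code 56 and the DRV_UNSUPPORTED_OS message.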

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (90.73s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-094448 --driver=kvm2  --container-runtime=containerd
E0731 20:03:01.522416  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-094448 --driver=kvm2  --container-runtime=containerd: (42.835601084s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-097718 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-097718 --driver=kvm2  --container-runtime=containerd: (45.268993822s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-094448
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-097718
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-097718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-097718
helpers_test.go:175: Cleaning up "first-094448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-094448
--- PASS: TestMinikubeProfile (90.73s)

TestMountStart/serial/StartWithMountFirst (31.06s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-841089 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-841089 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.06319182s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.06s)

TestMountStart/serial/VerifyMountFirst (0.38s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-841089 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-841089 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
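The VerifyMount* steps reduce to two ssh probes: `ls /minikube-host` must succeed and `mount` must list a 9p filesystem. A hedged Go sketch of the same checks; the mounted9p helper and the hard-coded profile name are illustrative only.

// verify_mount.go - sketch of the 9p mount probe used by VerifyMount* above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// mounted9p runs the same commands the test does:
// `minikube -p <profile> ssh -- ls /minikube-host` and `... ssh -- mount`.
func mounted9p(profile string) (bool, error) {
	// ls proves the mount point exists and is readable inside the VM.
	if err := exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host").Run(); err != nil {
		return false, fmt.Errorf("ls /minikube-host: %w", err)
	}
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").Output()
	if err != nil {
		return false, fmt.Errorf("mount: %w", err)
	}
	// The test greps the mount table for "9p"; do the same here.
	return strings.Contains(string(out), "9p"), nil
}

func main() {
	ok, err := mounted9p("mount-start-1-841089") // profile name from this run
	fmt.Println(ok, err)
}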

TestMountStart/serial/StartWithMountSecond (24.43s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-859215 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-859215 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (23.426366925s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.43s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-859215 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-859215 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-841089 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-859215 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-859215 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (1.27s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-859215
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-859215: (1.273424159s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (24.32s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-859215
E0731 20:05:19.821502  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-859215: (23.318503967s)
--- PASS: TestMountStart/serial/RestartStopped (24.32s)

TestMountStart/serial/VerifyMountPostStop (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-859215 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-859215 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (124.64s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-055285 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-055285 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m4.240051191s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (124.64s)

TestMultiNode/serial/DeployApp2Nodes (5.11s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-055285 -- rollout status deployment/busybox: (3.661886405s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-gvwhs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-lnrgc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-gvwhs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-lnrgc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-gvwhs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-lnrgc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.11s)
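DeployApp2Nodes resolves three DNS names from inside every busybox pod to confirm cluster DNS works on both nodes. A rough Go sketch of that loop, assuming (as in this run) the busybox pods are the only pods in the default namespace:

// dns_check.go - sketch of the per-pod DNS probe performed above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "multinode-055285" // kubectl context from this run
	// Same jsonpath query the test uses to list the pod names.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range []string{"kubernetes.io", "kubernetes.default",
			"kubernetes.default.svc.cluster.local"} {
			// nslookup from inside the pod exercises cluster DNS end to end.
			err := exec.Command("kubectl", "--context", ctx, "exec", pod,
				"--", "nslookup", name).Run()
			fmt.Printf("%s -> %s: ok=%v\n", pod, name, err == nil)
		}
	}
}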

TestMultiNode/serial/PingHostFrom2Pods (0.79s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-gvwhs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-gvwhs -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-lnrgc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-055285 -- exec busybox-fc5497c4f-lnrgc -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
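The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` takes the fifth line of busybox nslookup output and extracts its third field, the host IP that is then pinged (192.168.39.1 here). A small Go sketch of the same extraction; the sample nslookup output is illustrative, and strings.Fields stands in for cut's single-space splitting:

// host_ip.go - sketch of the host.minikube.internal extraction above.
package main

import (
	"fmt"
	"strings"
)

func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	f := strings.Fields(lines[4]) // NR==5 in the awk above
	if len(f) < 3 {
		return ""
	}
	return f[2] // cut -d' ' -f3
}

func main() {
	// Illustrative busybox nslookup output, not captured from this run.
	out := "Server: 10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name: host.minikube.internal\n" +
		"Address 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(out)) // prints 192.168.39.1
}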

TestMultiNode/serial/AddNode (47.69s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-055285 -v 3 --alsologtostderr
E0731 20:08:01.522897  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 20:08:22.870060  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-055285 -v 3 --alsologtostderr: (47.126628452s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.69s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-055285 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.12s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp testdata/cp-test.txt multinode-055285:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp multinode-055285:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile194074591/001/cp-test_multinode-055285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp multinode-055285:/home/docker/cp-test.txt multinode-055285-m02:/home/docker/cp-test_multinode-055285_multinode-055285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m02 "sudo cat /home/docker/cp-test_multinode-055285_multinode-055285-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp multinode-055285:/home/docker/cp-test.txt multinode-055285-m03:/home/docker/cp-test_multinode-055285_multinode-055285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m03 "sudo cat /home/docker/cp-test_multinode-055285_multinode-055285-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp testdata/cp-test.txt multinode-055285-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp multinode-055285-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile194074591/001/cp-test_multinode-055285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp multinode-055285-m02:/home/docker/cp-test.txt multinode-055285:/home/docker/cp-test_multinode-055285-m02_multinode-055285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285 "sudo cat /home/docker/cp-test_multinode-055285-m02_multinode-055285.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp multinode-055285-m02:/home/docker/cp-test.txt multinode-055285-m03:/home/docker/cp-test_multinode-055285-m02_multinode-055285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m03 "sudo cat /home/docker/cp-test_multinode-055285-m02_multinode-055285-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp testdata/cp-test.txt multinode-055285-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp multinode-055285-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile194074591/001/cp-test_multinode-055285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp multinode-055285-m03:/home/docker/cp-test.txt multinode-055285:/home/docker/cp-test_multinode-055285-m03_multinode-055285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285 "sudo cat /home/docker/cp-test_multinode-055285-m03_multinode-055285.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 cp multinode-055285-m03:/home/docker/cp-test.txt multinode-055285-m02:/home/docker/cp-test_multinode-055285-m03_multinode-055285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 ssh -n multinode-055285-m02 "sudo cat /home/docker/cp-test_multinode-055285-m03_multinode-055285-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.12s)
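CopyFile pushes testdata/cp-test.txt into each node, copies it back out, and copies it node-to-node, verifying every hop with `ssh -n <node> "sudo cat ..."`. One round trip as a Go sketch; the profile and node names are taken from this run:

// cp_roundtrip.go - sketch of one hop of the CopyFile matrix above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile, node := "multinode-055285", "multinode-055285-m02"
	src := "testdata/cp-test.txt"
	want, err := os.ReadFile(src)
	if err != nil {
		panic(err)
	}
	// Push the file into the node, as `minikube cp` does above.
	if err := exec.Command("minikube", "-p", profile, "cp", src,
		node+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read it back over ssh and compare, mirroring the `sudo cat` checks.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("round trip ok:", string(got) == string(want))
}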

TestMultiNode/serial/StopNode (2.11s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-055285 node stop m03: (1.284071763s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-055285 status: exit status 7 (414.436851ms)
-- stdout --
	multinode-055285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-055285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-055285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-055285 status --alsologtostderr: exit status 7 (414.030361ms)
-- stdout --
	multinode-055285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-055285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-055285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0731 20:08:51.106824  649866 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:08:51.106953  649866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:08:51.106964  649866 out.go:304] Setting ErrFile to fd 2...
	I0731 20:08:51.106971  649866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:08:51.107181  649866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 20:08:51.107389  649866 out.go:298] Setting JSON to false
	I0731 20:08:51.107424  649866 mustload.go:65] Loading cluster: multinode-055285
	I0731 20:08:51.107512  649866 notify.go:220] Checking for updates...
	I0731 20:08:51.107860  649866 config.go:182] Loaded profile config "multinode-055285": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 20:08:51.107880  649866 status.go:255] checking status of multinode-055285 ...
	I0731 20:08:51.108287  649866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 20:08:51.108370  649866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:08:51.125178  649866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40863
	I0731 20:08:51.125636  649866 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:08:51.126243  649866 main.go:141] libmachine: Using API Version  1
	I0731 20:08:51.126265  649866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:08:51.126686  649866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:08:51.126903  649866 main.go:141] libmachine: (multinode-055285) Calling .GetState
	I0731 20:08:51.128899  649866 status.go:330] multinode-055285 host status = "Running" (err=<nil>)
	I0731 20:08:51.128926  649866 host.go:66] Checking if "multinode-055285" exists ...
	I0731 20:08:51.129262  649866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 20:08:51.129317  649866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:08:51.144891  649866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37451
	I0731 20:08:51.145398  649866 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:08:51.145862  649866 main.go:141] libmachine: Using API Version  1
	I0731 20:08:51.145895  649866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:08:51.146204  649866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:08:51.146389  649866 main.go:141] libmachine: (multinode-055285) Calling .GetIP
	I0731 20:08:51.149226  649866 main.go:141] libmachine: (multinode-055285) DBG | domain multinode-055285 has defined MAC address 52:54:00:ba:0a:cd in network mk-multinode-055285
	I0731 20:08:51.149673  649866 main.go:141] libmachine: (multinode-055285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:0a:cd", ip: ""} in network mk-multinode-055285: {Iface:virbr1 ExpiryTime:2024-07-31 21:05:57 +0000 UTC Type:0 Mac:52:54:00:ba:0a:cd Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-055285 Clientid:01:52:54:00:ba:0a:cd}
	I0731 20:08:51.149724  649866 main.go:141] libmachine: (multinode-055285) DBG | domain multinode-055285 has defined IP address 192.168.39.249 and MAC address 52:54:00:ba:0a:cd in network mk-multinode-055285
	I0731 20:08:51.149785  649866 host.go:66] Checking if "multinode-055285" exists ...
	I0731 20:08:51.150083  649866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 20:08:51.150119  649866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:08:51.165844  649866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37381
	I0731 20:08:51.166262  649866 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:08:51.166731  649866 main.go:141] libmachine: Using API Version  1
	I0731 20:08:51.166756  649866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:08:51.167081  649866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:08:51.167369  649866 main.go:141] libmachine: (multinode-055285) Calling .DriverName
	I0731 20:08:51.167618  649866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:08:51.167646  649866 main.go:141] libmachine: (multinode-055285) Calling .GetSSHHostname
	I0731 20:08:51.170405  649866 main.go:141] libmachine: (multinode-055285) DBG | domain multinode-055285 has defined MAC address 52:54:00:ba:0a:cd in network mk-multinode-055285
	I0731 20:08:51.170805  649866 main.go:141] libmachine: (multinode-055285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ba:0a:cd", ip: ""} in network mk-multinode-055285: {Iface:virbr1 ExpiryTime:2024-07-31 21:05:57 +0000 UTC Type:0 Mac:52:54:00:ba:0a:cd Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:multinode-055285 Clientid:01:52:54:00:ba:0a:cd}
	I0731 20:08:51.170831  649866 main.go:141] libmachine: (multinode-055285) DBG | domain multinode-055285 has defined IP address 192.168.39.249 and MAC address 52:54:00:ba:0a:cd in network mk-multinode-055285
	I0731 20:08:51.170981  649866 main.go:141] libmachine: (multinode-055285) Calling .GetSSHPort
	I0731 20:08:51.171171  649866 main.go:141] libmachine: (multinode-055285) Calling .GetSSHKeyPath
	I0731 20:08:51.171351  649866 main.go:141] libmachine: (multinode-055285) Calling .GetSSHUsername
	I0731 20:08:51.171529  649866 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/multinode-055285/id_rsa Username:docker}
	I0731 20:08:51.246925  649866 ssh_runner.go:195] Run: systemctl --version
	I0731 20:08:51.253097  649866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:08:51.268336  649866 kubeconfig.go:125] found "multinode-055285" server: "https://192.168.39.249:8443"
	I0731 20:08:51.268372  649866 api_server.go:166] Checking apiserver status ...
	I0731 20:08:51.268409  649866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 20:08:51.281182  649866 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1102/cgroup
	W0731 20:08:51.290135  649866 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1102/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 20:08:51.290181  649866 ssh_runner.go:195] Run: ls
	I0731 20:08:51.294253  649866 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I0731 20:08:51.298404  649866 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I0731 20:08:51.298427  649866 status.go:422] multinode-055285 apiserver status = Running (err=<nil>)
	I0731 20:08:51.298441  649866 status.go:257] multinode-055285 status: &{Name:multinode-055285 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:08:51.298471  649866 status.go:255] checking status of multinode-055285-m02 ...
	I0731 20:08:51.298870  649866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 20:08:51.298915  649866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:08:51.314506  649866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41165
	I0731 20:08:51.314927  649866 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:08:51.315536  649866 main.go:141] libmachine: Using API Version  1
	I0731 20:08:51.315562  649866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:08:51.315960  649866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:08:51.316206  649866 main.go:141] libmachine: (multinode-055285-m02) Calling .GetState
	I0731 20:08:51.317942  649866 status.go:330] multinode-055285-m02 host status = "Running" (err=<nil>)
	I0731 20:08:51.317959  649866 host.go:66] Checking if "multinode-055285-m02" exists ...
	I0731 20:08:51.318234  649866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 20:08:51.318267  649866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:08:51.333427  649866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37227
	I0731 20:08:51.333803  649866 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:08:51.334264  649866 main.go:141] libmachine: Using API Version  1
	I0731 20:08:51.334280  649866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:08:51.334614  649866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:08:51.334846  649866 main.go:141] libmachine: (multinode-055285-m02) Calling .GetIP
	I0731 20:08:51.337752  649866 main.go:141] libmachine: (multinode-055285-m02) DBG | domain multinode-055285-m02 has defined MAC address 52:54:00:49:8e:da in network mk-multinode-055285
	I0731 20:08:51.338181  649866 main.go:141] libmachine: (multinode-055285-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:8e:da", ip: ""} in network mk-multinode-055285: {Iface:virbr1 ExpiryTime:2024-07-31 21:07:11 +0000 UTC Type:0 Mac:52:54:00:49:8e:da Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-055285-m02 Clientid:01:52:54:00:49:8e:da}
	I0731 20:08:51.338208  649866 main.go:141] libmachine: (multinode-055285-m02) DBG | domain multinode-055285-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:49:8e:da in network mk-multinode-055285
	I0731 20:08:51.338343  649866 host.go:66] Checking if "multinode-055285-m02" exists ...
	I0731 20:08:51.338642  649866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 20:08:51.338675  649866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:08:51.354779  649866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I0731 20:08:51.355197  649866 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:08:51.355671  649866 main.go:141] libmachine: Using API Version  1
	I0731 20:08:51.355693  649866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:08:51.356004  649866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:08:51.356184  649866 main.go:141] libmachine: (multinode-055285-m02) Calling .DriverName
	I0731 20:08:51.356356  649866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 20:08:51.356381  649866 main.go:141] libmachine: (multinode-055285-m02) Calling .GetSSHHostname
	I0731 20:08:51.358869  649866 main.go:141] libmachine: (multinode-055285-m02) DBG | domain multinode-055285-m02 has defined MAC address 52:54:00:49:8e:da in network mk-multinode-055285
	I0731 20:08:51.359321  649866 main.go:141] libmachine: (multinode-055285-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:8e:da", ip: ""} in network mk-multinode-055285: {Iface:virbr1 ExpiryTime:2024-07-31 21:07:11 +0000 UTC Type:0 Mac:52:54:00:49:8e:da Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:multinode-055285-m02 Clientid:01:52:54:00:49:8e:da}
	I0731 20:08:51.359347  649866 main.go:141] libmachine: (multinode-055285-m02) DBG | domain multinode-055285-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:49:8e:da in network mk-multinode-055285
	I0731 20:08:51.359477  649866 main.go:141] libmachine: (multinode-055285-m02) Calling .GetSSHPort
	I0731 20:08:51.359630  649866 main.go:141] libmachine: (multinode-055285-m02) Calling .GetSSHKeyPath
	I0731 20:08:51.359800  649866 main.go:141] libmachine: (multinode-055285-m02) Calling .GetSSHUsername
	I0731 20:08:51.359944  649866 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19355-616888/.minikube/machines/multinode-055285-m02/id_rsa Username:docker}
	I0731 20:08:51.441760  649866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 20:08:51.454891  649866 status.go:257] multinode-055285-m02 status: &{Name:multinode-055285-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:08:51.454936  649866 status.go:255] checking status of multinode-055285-m03 ...
	I0731 20:08:51.455334  649866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 20:08:51.455388  649866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:08:51.472225  649866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39855
	I0731 20:08:51.472711  649866 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:08:51.473201  649866 main.go:141] libmachine: Using API Version  1
	I0731 20:08:51.473228  649866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:08:51.473561  649866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:08:51.473759  649866 main.go:141] libmachine: (multinode-055285-m03) Calling .GetState
	I0731 20:08:51.475352  649866 status.go:330] multinode-055285-m03 host status = "Stopped" (err=<nil>)
	I0731 20:08:51.475367  649866 status.go:343] host is not running, skipping remaining checks
	I0731 20:08:51.475373  649866 status.go:257] multinode-055285-m03 status: &{Name:multinode-055285-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.11s)
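Note that `minikube status` exits with status 7 above because one host is stopped, so the test treats the non-zero exit as expected. A Go sketch that separates that case from a genuine failure; the exit-code mapping is observed behavior in this run, not asserted here as a stable contract:

// status_exit.go - sketch of interpreting `minikube status` exit codes,
// based on the exit status 7 seen above when m03 is stopped.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-055285", "status").CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit code 7 is how status reported stopped hosts in this run.
		fmt.Println("at least one node is stopped")
	default:
		fmt.Println("status failed:", err)
	}
	fmt.Print(string(out))
}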

TestMultiNode/serial/StartAfterStop (34.71s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-055285 node start m03 -v=7 --alsologtostderr: (34.097197605s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (34.71s)

TestMultiNode/serial/RestartKeepsNodes (315.69s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-055285
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-055285
E0731 20:10:19.825538  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-055285: (3m4.131208137s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-055285 --wait=true -v=8 --alsologtostderr
E0731 20:13:01.522507  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-055285 --wait=true -v=8 --alsologtostderr: (2m11.470264508s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-055285
--- PASS: TestMultiNode/serial/RestartKeepsNodes (315.69s)

TestMultiNode/serial/DeleteNode (2.26s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-055285 node delete m03: (1.742114446s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.26s)

TestMultiNode/serial/StopMultiNode (183.1s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 stop
E0731 20:15:19.825376  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 20:16:04.565828  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-055285 stop: (3m2.93100937s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-055285 status: exit status 7 (85.498846ms)
-- stdout --
	multinode-055285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-055285-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-055285 status --alsologtostderr: exit status 7 (85.429972ms)
-- stdout --
	multinode-055285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-055285-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0731 20:17:47.205602  653048 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:17:47.205866  653048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:17:47.205876  653048 out.go:304] Setting ErrFile to fd 2...
	I0731 20:17:47.205880  653048 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:17:47.206128  653048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 20:17:47.206347  653048 out.go:298] Setting JSON to false
	I0731 20:17:47.206373  653048 mustload.go:65] Loading cluster: multinode-055285
	I0731 20:17:47.206477  653048 notify.go:220] Checking for updates...
	I0731 20:17:47.206813  653048 config.go:182] Loaded profile config "multinode-055285": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 20:17:47.206834  653048 status.go:255] checking status of multinode-055285 ...
	I0731 20:17:47.207251  653048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 20:17:47.207343  653048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:17:47.222940  653048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0731 20:17:47.223361  653048 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:17:47.223976  653048 main.go:141] libmachine: Using API Version  1
	I0731 20:17:47.224013  653048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:17:47.224373  653048 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:17:47.224598  653048 main.go:141] libmachine: (multinode-055285) Calling .GetState
	I0731 20:17:47.226588  653048 status.go:330] multinode-055285 host status = "Stopped" (err=<nil>)
	I0731 20:17:47.226605  653048 status.go:343] host is not running, skipping remaining checks
	I0731 20:17:47.226612  653048 status.go:257] multinode-055285 status: &{Name:multinode-055285 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 20:17:47.226665  653048 status.go:255] checking status of multinode-055285-m02 ...
	I0731 20:17:47.226961  653048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0731 20:17:47.227003  653048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 20:17:47.242008  653048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0731 20:17:47.242409  653048 main.go:141] libmachine: () Calling .GetVersion
	I0731 20:17:47.242999  653048 main.go:141] libmachine: Using API Version  1
	I0731 20:17:47.243025  653048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 20:17:47.243395  653048 main.go:141] libmachine: () Calling .GetMachineName
	I0731 20:17:47.243603  653048 main.go:141] libmachine: (multinode-055285-m02) Calling .GetState
	I0731 20:17:47.245193  653048 status.go:330] multinode-055285-m02 host status = "Stopped" (err=<nil>)
	I0731 20:17:47.245210  653048 status.go:343] host is not running, skipping remaining checks
	I0731 20:17:47.245217  653048 status.go:257] multinode-055285-m02 status: &{Name:multinode-055285-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.10s)

TestMultiNode/serial/RestartMultiNode (89.93s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-055285 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0731 20:18:01.522215  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-055285 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m29.399147696s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-055285 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.93s)

TestMultiNode/serial/ValidateNameConflict (41.86s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-055285
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-055285-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-055285-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (63.429744ms)
-- stdout --
	* [multinode-055285-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-055285-m02' is duplicated with machine name 'multinode-055285-m02' in profile 'multinode-055285'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-055285-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-055285-m03 --driver=kvm2  --container-runtime=containerd: (40.544270252s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-055285
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-055285: exit status 80 (216.706761ms)
-- stdout --
	* Adding node m03 to cluster multinode-055285 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-055285-m03 already exists in multinode-055285-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-055285-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.86s)

TestPreload (321.37s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-586856 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0731 20:20:19.821775  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-586856 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m49.711635797s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-586856 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-586856 image pull gcr.io/k8s-minikube/busybox: (2.266044202s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-586856
E0731 20:23:01.522774  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-586856: (1m31.435304495s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-586856 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0731 20:25:02.871062  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 20:25:19.822280  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-586856 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (56.888979908s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-586856 image list
helpers_test.go:175: Cleaning up "test-preload-586856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-586856
--- PASS: TestPreload (321.37s)

TestScheduledStopUnix (110.59s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-575773 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-575773 --memory=2048 --driver=kvm2  --container-runtime=containerd: (39.056370081s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575773 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-575773 -n scheduled-stop-575773
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575773 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575773 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575773 -n scheduled-stop-575773
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-575773
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-575773 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-575773
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-575773: exit status 7 (66.0012ms)
-- stdout --
	scheduled-stop-575773
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575773 -n scheduled-stop-575773
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-575773 -n scheduled-stop-575773: exit status 7 (63.741663ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-575773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-575773
--- PASS: TestScheduledStopUnix (110.59s)
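The scheduled-stop sequence above, replayed as a standalone sketch (profile name hypothetical; flags exactly as logged):

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled   # disarm it; the host keeps running
    minikube stop -p sched-demo --schedule 15s       # re-arm with a short fuse
    sleep 20
    minikube status -p sched-demo                    # now exits 7 with host: Stopped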

TestRunningBinaryUpgrade (142.48s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.674209146 start -p running-upgrade-039979 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.674209146 start -p running-upgrade-039979 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m23.394694787s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-039979 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-039979 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (55.643572716s)
helpers_test.go:175: Cleaning up "running-upgrade-039979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-039979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-039979: (1.190044979s)
--- PASS: TestRunningBinaryUpgrade (142.48s)

TestKubernetesUpgrade (189.24s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-155414 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-155414 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m27.058870527s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-155414
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-155414: (1.529435947s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-155414 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-155414 status --format={{.Host}}: exit status 7 (87.850767ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-155414 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-155414 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (46.581170953s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-155414 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-155414 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-155414 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (81.613234ms)
-- stdout --
	* [kubernetes-upgrade-155414] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-155414
	    minikube start -p kubernetes-upgrade-155414 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1554142 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-155414 --kubernetes-version=v1.31.0-beta.0
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-155414 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-155414 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (52.735546492s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-155414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-155414
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-155414: (1.104914242s)
--- PASS: TestKubernetesUpgrade (189.24s)
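The upgrade path above in sketch form: the version bump is applied in place on a stopped cluster, and the final start exercises the downgrade guard (profile name hypothetical; flags as logged):

    minikube start -p upgrade-demo --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --kubernetes-version=v1.31.0-beta.0 --driver=kvm2 --container-runtime=containerd
    kubectl --context upgrade-demo version --output=json
    # downgrading exits 106 (K8S_DOWNGRADE_UNSUPPORTED); delete and recreate instead
    minikube start -p upgrade-demo --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd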

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324619 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-324619 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (78.372073ms)
-- stdout --
	* [NoKubernetes-324619] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (121.22s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324619 --driver=kvm2  --container-runtime=containerd
E0731 20:28:01.522915  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-324619 --driver=kvm2  --container-runtime=containerd: (2m0.979252518s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-324619 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (121.22s)

TestNetworkPlugins/group/false (3.1s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-418387 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-418387 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (109.934636ms)
-- stdout --
	* [false-418387] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19355
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
-- /stdout --
** stderr ** 
	I0731 20:28:14.347065  658363 out.go:291] Setting OutFile to fd 1 ...
	I0731 20:28:14.347355  658363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:28:14.347366  658363 out.go:304] Setting ErrFile to fd 2...
	I0731 20:28:14.347370  658363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 20:28:14.347564  658363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19355-616888/.minikube/bin
	I0731 20:28:14.348124  658363 out.go:298] Setting JSON to false
	I0731 20:28:14.349180  658363 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":15038,"bootTime":1722442656,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 20:28:14.349247  658363 start.go:139] virtualization: kvm guest
	I0731 20:28:14.351355  658363 out.go:177] * [false-418387] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 20:28:14.352654  658363 out.go:177]   - MINIKUBE_LOCATION=19355
	I0731 20:28:14.352697  658363 notify.go:220] Checking for updates...
	I0731 20:28:14.355163  658363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 20:28:14.356404  658363 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19355-616888/kubeconfig
	I0731 20:28:14.357588  658363 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19355-616888/.minikube
	I0731 20:28:14.359005  658363 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 20:28:14.360401  658363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 20:28:14.362240  658363 config.go:182] Loaded profile config "NoKubernetes-324619": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 20:28:14.362382  658363 config.go:182] Loaded profile config "cert-expiration-485296": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 20:28:14.362507  658363 config.go:182] Loaded profile config "force-systemd-env-327694": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0731 20:28:14.362673  658363 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 20:28:14.402910  658363 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 20:28:14.404104  658363 start.go:297] selected driver: kvm2
	I0731 20:28:14.404119  658363 start.go:901] validating driver "kvm2" against <nil>
	I0731 20:28:14.404133  658363 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 20:28:14.406072  658363 out.go:177] 
	W0731 20:28:14.407277  658363 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0731 20:28:14.408441  658363 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-418387 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-418387

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-418387

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-418387

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-418387

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-418387

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-418387

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-418387

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-418387

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-418387

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-418387

>>> host: /etc/nsswitch.conf:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /etc/hosts:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /etc/resolv.conf:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-418387

>>> host: crictl pods:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: crictl containers:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> k8s: describe netcat deployment:
error: context "false-418387" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-418387" does not exist

>>> k8s: netcat logs:
error: context "false-418387" does not exist

>>> k8s: describe coredns deployment:
error: context "false-418387" does not exist

>>> k8s: describe coredns pods:
error: context "false-418387" does not exist

>>> k8s: coredns logs:
error: context "false-418387" does not exist

>>> k8s: describe api server pod(s):
error: context "false-418387" does not exist

>>> k8s: api server logs:
error: context "false-418387" does not exist

>>> host: /etc/cni:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: ip a s:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: ip r s:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: iptables-save:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: iptables table nat:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> k8s: describe kube-proxy daemon set:
error: context "false-418387" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-418387" does not exist

>>> k8s: kube-proxy logs:
error: context "false-418387" does not exist

>>> host: kubelet daemon status:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: kubelet daemon config:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> k8s: kubelet logs:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-418387

>>> host: docker daemon status:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: docker daemon config:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /etc/docker/daemon.json:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: docker system info:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: cri-docker daemon status:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: cri-docker daemon config:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: cri-dockerd version:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: containerd daemon status:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: containerd daemon config:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /etc/containerd/config.toml:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: containerd config dump:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: crio daemon status:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: crio daemon config:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: /etc/crio:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"

>>> host: crio config:
* Profile "false-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-418387"
----------------------- debugLogs end: false-418387 [took: 2.839198089s] --------------------------------
helpers_test.go:175: Cleaning up "false-418387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-418387
--- PASS: TestNetworkPlugins/group/false (3.10s)
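The quick failure above is pure flag validation: with the containerd runtime a CNI is required, so --cni=false is rejected before any VM is created, which is also why every debugLogs probe above reports a profile that never existed. A one-command sketch (profile name hypothetical):

    minikube start -p cni-demo --cni=false --driver=kvm2 --container-runtime=containerd
    # exit status 14: MK_USAGE: The "containerd" container runtime requires CNI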

TestNoKubernetes/serial/StartWithStopK8s (21.11s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324619 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-324619 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (19.565807811s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-324619 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-324619 status -o json: exit status 2 (261.598963ms)
-- stdout --
	{"Name":"NoKubernetes-324619","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-324619
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-324619: (1.279647601s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.11s)
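Re-creating the profile with --no-kubernetes leaves the VM running with Kubernetes down, which is what the JSON status above encodes; a sketch (profile name hypothetical):

    minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=containerd
    minikube -p nok8s-demo status -o json
    # exit status 2: Host "Running", Kubelet and APIServer "Stopped"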

TestNoKubernetes/serial/Start (38.57s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324619 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-324619 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (38.568654803s)
--- PASS: TestNoKubernetes/serial/Start (38.57s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-324619 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-324619 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.000941ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
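The check above runs systemctl inside the guest over minikube ssh; systemctl is-active exits 3 for an inactive unit (the "Process exited with status 3" in stderr), and minikube surfaces that as a non-zero exit. Sketch (profile name hypothetical):

    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero while kubelet is not running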

TestNoKubernetes/serial/ProfileList (1.43s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.43s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-324619
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-324619: (1.285124501s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (38.05s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-324619 --driver=kvm2  --container-runtime=containerd
E0731 20:30:19.821313  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-324619 --driver=kvm2  --container-runtime=containerd: (38.054679955s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (38.05s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-324619 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-324619 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.875262ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/Setup (2.41s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.41s)

TestStoppedBinaryUpgrade/Upgrade (151.4s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2694345622 start -p stopped-upgrade-498814 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2694345622 start -p stopped-upgrade-498814 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (52.591193695s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2694345622 -p stopped-upgrade-498814 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2694345622 -p stopped-upgrade-498814 stop: (1.3244978s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-498814 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-498814 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m37.484345135s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (151.40s)
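The stopped-binary upgrade in sketch form: provision with the old release, stop, then start the same profile with the binary under test (the /tmp path is the old release fetched by the test; profile name hypothetical):

    /tmp/minikube-v1.26.0.2694345622 start -p stopped-demo --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    /tmp/minikube-v1.26.0.2694345622 -p stopped-demo stop
    out/minikube-linux-amd64 start -p stopped-demo --memory=2200 --driver=kvm2 --container-runtime=containerd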

TestPause/serial/Start (64.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-255136 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-255136 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m4.715561923s)
--- PASS: TestPause/serial/Start (64.72s)

TestNetworkPlugins/group/auto/Start (122.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0731 20:32:44.566822  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m2.794246464s)
--- PASS: TestNetworkPlugins/group/auto/Start (122.79s)

TestNetworkPlugins/group/kindnet/Start (101.71s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0731 20:33:01.522738  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m41.708150768s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.71s)

TestPause/serial/SecondStartNoReconfiguration (57.5s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-255136 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-255136 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (57.48165958s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (57.50s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-498814
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

TestNetworkPlugins/group/calico/Start (95.11s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m35.107579567s)
--- PASS: TestNetworkPlugins/group/calico/Start (95.11s)

TestPause/serial/Pause (0.69s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-255136 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

TestPause/serial/VerifyStatus (0.24s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-255136 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-255136 --output=json --layout=cluster: exit status 2 (236.34044ms)
-- stdout --
	{"Name":"pause-255136","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-255136","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
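As the JSON above shows, the cluster layout reports pause with HTTP-like status codes (418 "Paused" for the cluster and apiserver, 405 "Stopped" for the kubelet), and status against a paused cluster exits 2. A sketch of the loop (profile name hypothetical):

    minikube pause -p pause-demo
    minikube status -p pause-demo --output=json --layout=cluster   # exit 2, StatusName "Paused"
    minikube unpause -p pause-demo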

TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-255136 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.77s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-255136 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.77s)

TestPause/serial/DeletePaused (1.02s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-255136 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-255136 --alsologtostderr -v=5: (1.017462242s)
--- PASS: TestPause/serial/DeletePaused (1.02s)

TestPause/serial/VerifyDeletedResources (0.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.61s)

TestNetworkPlugins/group/custom-flannel/Start (85.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m25.224781828s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.22s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-418387 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-418387 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2lfcn" [2dab6171-0ebd-4bd2-863c-8a76dc996b08] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2lfcn" [2dab6171-0ebd-4bd2-863c-8a76dc996b08] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.012054656s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vxf5r" [9b39308b-7e45-4157-be3a-5dc18fac218c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006862563s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-418387 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-418387 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-s895n" [7d9934a9-a272-497d-b6d9-fc47a878f667] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-s895n" [7d9934a9-a272-497d-b6d9-fc47a878f667] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.013064038s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.25s)

TestNetworkPlugins/group/auto/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-418387 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
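DNS, Localhost, and HairPin above are the standard connectivity trio run against each CNI profile: service-name resolution, a pod reaching its own port via localhost, and a pod reaching itself back through its own service. The probes, verbatim from the log:

    kubectl --context auto-418387 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"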

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-418387 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (102.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m42.153883952s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (102.15s)
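
With --enable-default-cni=true, minikube wires up its built-in default CNI configuration rather than deploying a plugin DaemonSet, which is consistent with this group having no ControllerPod subtest. One way to eyeball the resulting node configuration (assuming the conventional CNI config directory; this is an illustrative check, not part of the test):

    out/minikube-linux-amd64 ssh -p enable-default-cni-418387 "ls /etc/cni/net.d"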

                                                
                                    
TestNetworkPlugins/group/flannel/Start (97.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m37.824188327s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.82s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8dqqz" [1a961feb-64c1-4e50-81f1-044dfae5b3bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006760823s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
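
ControllerPod gates the rest of the calico group on the CNI's node agent: it waits up to 10m for a kube-system pod labeled k8s-app=calico-node to be Running. An equivalent standalone wait (a sketch, not the test's implementation):

    kubectl --context calico-418387 wait --for=condition=ready pod -l k8s-app=calico-node -n kube-system --timeout=10m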

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-418387 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-418387 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xzklr" [45797b31-0c7d-490b-9864-7c31bdf5c85e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xzklr" [45797b31-0c7d-490b-9864-7c31bdf5c85e] Running
E0731 20:35:19.821437  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004118816s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-418387 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (108.42s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-418387 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m48.424269421s)
--- PASS: TestNetworkPlugins/group/bridge/Start (108.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-418387 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.81s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-418387 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-418387 replace --force -f testdata/netcat-deployment.yaml: (1.754381661s)
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-dz8dn" [0473f485-59e7-498a-b075-3fdaae94a651] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-dz8dn" [0473f485-59e7-498a-b075-3fdaae94a651] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004620488s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.81s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-418387 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (181.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-224036 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-224036 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m1.916976803s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (181.92s)
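
--kubernetes-version=v1.20.0 pins this cluster several minor releases back; one plausible reason the start takes roughly twice as long as the other FirstStart runs here is that preload tarballs and images for an old release are less likely to be cached on the build host. Confirming the pinned version after start (an illustrative check, not part of the test):

    kubectl --context old-k8s-version-224036 get nodes -o wide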

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-418387 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-418387 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fjsbl" [1d1b846e-f797-43bd-9fd1-6b5e3b63352b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fjsbl" [1d1b846e-f797-43bd-9fd1-6b5e3b63352b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004147986s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-249cj" [f20e6663-b2b5-4833-a7dd-1a650be71852] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004694835s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-418387 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-418387 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-418387 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gpdsm" [1ff4cad1-1420-4bff-8665-707c569506e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gpdsm" [1ff4cad1-1420-4bff-8665-707c569506e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.079478432s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-418387 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (102.16s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-966106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-966106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (1m42.16249343s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (102.16s)
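
--preload=false disables minikube's preloaded image/filesystem tarball, forcing the container runtime to pull every image over the network, which is the point of the no-preload group. Listing what ended up in containerd afterwards (an illustrative step using a real minikube subcommand, not part of the test):

    out/minikube-linux-amd64 -p no-preload-966106 image ls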

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (82.42s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-365397 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-365397 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m22.419763248s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-418387 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-418387 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mmmbx" [b8ac6861-e6ee-4a60-89b9-2b25436c1f9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-mmmbx" [b8ac6861-e6ee-4a60-89b9-2b25436c1f9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00463057s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-418387 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-418387 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-956265 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.3
E0731 20:38:01.522130  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-956265 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m3.935444058s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.94s)
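
--apiserver-port=8444 moves the API server off minikube's default 8443, and the rest of the group then has to keep working against the non-default port. A quick way to confirm the advertised endpoint (illustrative; kubectl prints the control-plane URL, including the port, in its cluster-info output):

    kubectl --context default-k8s-diff-port-956265 cluster-info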

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-365397 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7cf6a64a-fe43-4f1d-bb3d-fc1fb26b4369] Pending
helpers_test.go:344: "busybox" [7cf6a64a-fe43-4f1d-bb3d-fc1fb26b4369] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7cf6a64a-fe43-4f1d-bb3d-fc1fb26b4369] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00431685s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-365397 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)
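
DeployApp creates the busybox pod, waits for it to become healthy, and then runs ulimit -n inside it as a smoke test that exec works and the container inherits a sane open-file limit. The wait step, expressed directly with kubectl (a sketch; the 8m timeout mirrors the test's budget):

    kubectl --context embed-certs-365397 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context embed-certs-365397 exec busybox -- /bin/sh -c "ulimit -n"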

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-966106 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cc5dc3ba-03cc-418c-aeca-a1ef6a1f07c0] Pending
helpers_test.go:344: "busybox" [cc5dc3ba-03cc-418c-aeca-a1ef6a1f07c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cc5dc3ba-03cc-418c-aeca-a1ef6a1f07c0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004172115s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-966106 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-365397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-365397 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)
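
EnableAddonWhileActive turns on the metrics-server addon against a running cluster, with --images and --registries redirecting the component to a deliberately fake image (registry.k8s.io/echoserver:1.4 via fake.domain) so the test can recognize its own override in the deployment. One way to confirm the substitution landed (an illustrative jsonpath query, not the test's assertion):

    kubectl --context embed-certs-365397 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'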

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.62s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-365397 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-365397 --alsologtostderr -v=3: (1m31.622837223s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.62s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-966106 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-966106 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (91.56s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-966106 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-966106 --alsologtostderr -v=3: (1m31.560644107s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-956265 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e874a5f0-9250-4077-aa53-675683a4f2ad] Pending
helpers_test.go:344: "busybox" [e874a5f0-9250-4077-aa53-675683a4f2ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e874a5f0-9250-4077-aa53-675683a4f2ad] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004547524s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-956265 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-956265 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-956265 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-956265 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-956265 --alsologtostderr -v=3: (1m31.567752184s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-224036 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [86f3ad4b-c65b-45f2-bed1-1e3b94901fd6] Pending
helpers_test.go:344: "busybox" [86f3ad4b-c65b-45f2-bed1-1e3b94901fd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [86f3ad4b-c65b-45f2-bed1-1e3b94901fd6] Running
E0731 20:39:29.251027  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:29.256296  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:29.266671  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:29.287030  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:29.327361  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:29.407750  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:29.568412  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:29.888866  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:30.529842  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:30.972291  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:30.977582  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:30.987901  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:31.008228  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:31.048544  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:31.129706  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:31.290176  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:31.611002  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:31.810317  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:32.251246  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004245247s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-224036 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-224036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0731 20:39:33.531777  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-224036 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (91.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-224036 --alsologtostderr -v=3
E0731 20:39:34.371362  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:36.092299  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:39.492300  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:41.212556  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:39:49.732925  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:39:51.452899  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:40:06.574287  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:06.579681  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:06.589932  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:06.610252  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:06.650625  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:06.730794  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:06.891203  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:07.212342  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:07.852647  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:09.133480  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:10.214116  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:40:11.693639  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:11.933904  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:40:16.814383  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:19.821756  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-224036 --alsologtostderr -v=3: (1m31.649792423s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-365397 -n embed-certs-365397
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-365397 -n embed-certs-365397: exit status 7 (70.687845ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-365397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
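
Exit status 7 from minikube status here is expected rather than a failure, which is why the test logs it as "may be ok": minikube encodes component state bitwise in the status exit code, so 7 (1 + 2 + 4) reflects that host, cluster, and kubernetes are all down, exactly what the preceding Stop produced. The test then verifies that addons can still be enabled against the stopped profile. Checking the code by hand (shell sketch; $? holds the status command's exit code):

    out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-365397
    echo $?   # prints 7 while the host is stopped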

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (292.15s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-365397 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.3
E0731 20:40:27.054788  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-365397 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m51.892941809s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-365397 -n embed-certs-365397
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (292.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966106 -n no-preload-966106
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966106 -n no-preload-966106: exit status 7 (67.321169ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-966106 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (318.99s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-966106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-966106 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (5m18.666713271s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-966106 -n no-preload-966106
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (318.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-956265 -n default-k8s-diff-port-956265
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-956265 -n default-k8s-diff-port-956265: exit status 7 (85.53152ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-956265 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-956265 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.3
E0731 20:40:47.535571  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:40:51.175214  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:40:52.894263  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:40:53.603467  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:53.608792  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:53.619064  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:53.639446  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:53.679788  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:53.760346  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:53.920753  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:54.240874  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:54.881466  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:56.161905  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:40:58.723143  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:41:03.844334  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-956265 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.3: (5m44.392671738s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-956265 -n default-k8s-diff-port-956265
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-224036 -n old-k8s-version-224036
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-224036 -n old-k8s-version-224036: exit status 7 (77.940235ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-224036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (435.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-224036 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0731 20:41:14.085242  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:41:28.496606  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:41:34.565862  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:41:37.866536  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:37.871851  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:37.882141  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:37.902519  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:37.942860  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:38.023254  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:38.184063  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:38.504917  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:39.145626  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:40.426627  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:42.830590  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:42.835911  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:42.846206  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:42.866561  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:42.871823  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
E0731 20:41:42.907051  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:42.987250  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:42.987266  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:43.147641  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:43.468833  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:44.109938  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:45.390253  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:47.951336  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:48.109492  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:41:53.072083  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:41:58.350299  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:42:03.313013  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:42:13.095476  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:42:14.814644  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:42:15.526080  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:42:18.830668  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:42:23.793434  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:42:29.649618  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:29.654882  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:29.665221  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:29.685553  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:29.725887  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:29.806655  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:29.967820  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:30.288379  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:30.929274  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:32.210006  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:34.770417  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:39.891214  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:50.132280  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:42:50.416822  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:42:59.791638  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:43:01.522812  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/functional-406825/client.crt: no such file or directory
E0731 20:43:04.754516  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:43:10.612973  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:43:37.447023  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
E0731 20:43:51.574004  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
E0731 20:44:21.712152  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
E0731 20:44:26.675329  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/flannel-418387/client.crt: no such file or directory
E0731 20:44:29.251247  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:44:30.972006  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:44:56.936019  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/auto-418387/client.crt: no such file or directory
E0731 20:44:58.655172  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/kindnet-418387/client.crt: no such file or directory
E0731 20:45:06.574179  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
E0731 20:45:13.494279  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/bridge-418387/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-224036 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (7m15.210181686s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-224036 -n old-k8s-version-224036
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (435.47s)
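
The cert_rotation errors interleaved with this 7m15s start are noise from client-go's certificate-rotation watcher: it is still polling the client.crt files of the *-418387 NetworkPlugins profiles (plus addons-449571 and functional-406825), which earlier tests already deleted. They are unrelated to old-k8s-version-224036. A quick sanity check on the runner (sketch, assuming the workspace layout in the paths above):

$ ls /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/*/client.crt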

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5skzq" [147db0b2-3b5b-481f-8fed-9a4907182871] Running
E0731 20:45:19.822165  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/addons-449571/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003895446s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5skzq" [147db0b2-3b5b-481f-8fed-9a4907182871] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006917355s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-365397 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-365397 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.65s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-365397 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-365397 -n embed-certs-365397
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-365397 -n embed-certs-365397: exit status 2 (249.565127ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-365397 -n embed-certs-365397
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-365397 -n embed-certs-365397: exit status 2 (244.954377ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-365397 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-365397 -n embed-certs-365397
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-365397 -n embed-certs-365397
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.65s)
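
The Pause subtests all follow the same sequence: pause the profile, confirm {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped (each with the tolerated exit status 2), then unpause and re-check. Condensed, with a stock minikube binary (same commands as above):

$ minikube pause -p embed-certs-365397 --alsologtostderr -v=1
$ minikube status -p embed-certs-365397 --format={{.APIServer}}   # Paused, exit 2
$ minikube status -p embed-certs-365397 --format={{.Kubelet}}     # Stopped, exit 2
$ minikube unpause -p embed-certs-365397 --alsologtostderr -v=1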

TestStartStop/group/newest-cni/serial/FirstStart (46.53s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-914517 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0731 20:45:34.257643  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/calico-418387/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-914517 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (46.533262487s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.53s)
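
Worth decoding this flag set: --wait=apiserver,system_pods,default_sa narrows readiness gating to just those components, --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 leaves pod networking to a user-supplied CNI (hence the "cni mode requires additional setup" warnings later in this group), and ServerSideApply is switched on through --feature-gates. To reproduce outside the harness (sketch, stock minikube binary):

$ minikube start -p newest-cni-914517 --memory=2200 --driver=kvm2 \
    --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0 \
    --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --feature-gates ServerSideApply=true --wait=apiserver,system_pods,default_sa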

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-wfp2q" [54c3ba73-de54-40dd-80eb-174f420e67bc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-wfp2q" [54c3ba73-de54-40dd-80eb-174f420e67bc] Running
E0731 20:45:53.603804  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.005995983s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-wfp2q" [54c3ba73-de54-40dd-80eb-174f420e67bc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005485588s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-966106 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-966106 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (2.75s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-966106 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-966106 -n no-preload-966106
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-966106 -n no-preload-966106: exit status 2 (239.580946ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-966106 -n no-preload-966106
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-966106 -n no-preload-966106: exit status 2 (242.284875ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-966106 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-966106 -n no-preload-966106
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-966106 -n no-preload-966106
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.75s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-914517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-914517 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.057673107s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)
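
The --images/--registries overrides appear to be how the suite avoids depending on a working metrics-server: the addon is pointed at registry.k8s.io/echoserver:1.4 behind the deliberately bogus fake.domain registry, so only the enable path itself is exercised. The same override by hand:

$ minikube addons enable metrics-server -p newest-cni-914517 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=fake.domain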

TestStartStop/group/newest-cni/serial/Stop (2.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-914517 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-914517 --alsologtostderr -v=3: (2.331718311s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-914517 -n newest-cni-914517
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-914517 -n newest-cni-914517: exit status 7 (66.158813ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-914517 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (32.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-914517 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0731 20:46:21.288336  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/custom-flannel-418387/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-914517 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (32.174609112s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-914517 -n newest-cni-914517
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.43s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-sdhdj" [17f87dcb-56cd-46ef-b4fa-0eb1e81cc7e8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-sdhdj" [17f87dcb-56cd-46ef-b4fa-0eb1e81cc7e8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004813879s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-sdhdj" [17f87dcb-56cd-46ef-b4fa-0eb1e81cc7e8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003611375s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-956265 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-956265 image list --format=json
E0731 20:46:37.866814  624149 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19355-616888/.minikube/profiles/enable-default-cni-418387/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)
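
VerifyKubernetesImages compares the `image list --format=json` output against the expected image set for the profile's Kubernetes version and flags everything else as non-minikube (here kindnetd and the busybox test image). To eyeball the same list by hand (sketch, assuming jq is available and the JSON keeps its usual repoTags field):

$ minikube -p default-k8s-diff-port-956265 image list --format=json \
    | jq -r '.[].repoTags[]' | sort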

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-956265 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-956265 -n default-k8s-diff-port-956265
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-956265 -n default-k8s-diff-port-956265: exit status 2 (236.90179ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-956265 -n default-k8s-diff-port-956265
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-956265 -n default-k8s-diff-port-956265: exit status 2 (244.639407ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-956265 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-956265 -n default-k8s-diff-port-956265
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-956265 -n default-k8s-diff-port-956265
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-914517 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-914517 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-914517 -n newest-cni-914517
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-914517 -n newest-cni-914517: exit status 2 (240.429286ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-914517 -n newest-cni-914517
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-914517 -n newest-cni-914517: exit status 2 (229.509091ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-914517 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-914517 -n newest-cni-914517
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-914517 -n newest-cni-914517
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-l9fx6" [db2504b9-fa80-4851-841f-bc589bc3d914] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003603461s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
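
Both of the *ExistsAfterStop checks poll for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace after the restart. Roughly equivalent by hand (sketch):

$ kubectl --context old-k8s-version-224036 -n kubernetes-dashboard \
    wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m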

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-l9fx6" [db2504b9-fa80-4851-841f-bc589bc3d914] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003989688s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-224036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-224036 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-224036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-224036 -n old-k8s-version-224036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-224036 -n old-k8s-version-224036: exit status 2 (231.734534ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-224036 -n old-k8s-version-224036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-224036 -n old-k8s-version-224036: exit status 2 (233.061377ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-224036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-224036 -n old-k8s-version-224036
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-224036 -n old-k8s-version-224036
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.29s)
Test skip (39/334)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
134 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
264 TestNetworkPlugins/group/kubenet 3.26
272 TestNetworkPlugins/group/cilium 3.34
287 TestStartStop/group/disable-driver-mounts 0.14

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
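
These DownloadOnly subtests are skipped whenever a preload tarball already exists for the requested Kubernetes version and runtime, since the cached images (and binaries) ship inside it. A quick look at what is cached (sketch; the preloaded-tarball directory is the usual default location, not something this log confirms):

$ ls ~/.minikube/cache/preloaded-tarball/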

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
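
The whole TunnelCmd group below is skipped for the same reason: `minikube tunnel` has to modify the host routing table via `route`, and the CI user cannot do that without a password. Locally, the tunnel runs if sudo credentials are cached first (sketch):

$ sudo -v        # cache credentials so route can run unattended
$ minikube tunnel -p functional-406825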

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.26s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-418387 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-418387

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-418387

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-418387

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-418387

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-418387

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-418387

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-418387

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-418387

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-418387

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-418387

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /etc/hosts:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /etc/resolv.conf:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-418387

>>> host: crictl pods:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: crictl containers:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> k8s: describe netcat deployment:
error: context "kubenet-418387" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-418387" does not exist

>>> k8s: netcat logs:
error: context "kubenet-418387" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-418387" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-418387" does not exist

>>> k8s: coredns logs:
error: context "kubenet-418387" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-418387" does not exist

>>> k8s: api server logs:
error: context "kubenet-418387" does not exist

>>> host: /etc/cni:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: ip a s:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: ip r s:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: iptables-save:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: iptables table nat:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-418387" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-418387" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-418387" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: kubelet daemon config:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> k8s: kubelet logs:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-418387

>>> host: docker daemon status:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: docker daemon config:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: docker system info:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: cri-docker daemon status:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: cri-docker daemon config:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: cri-dockerd version:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: containerd daemon status:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: containerd daemon config:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: containerd config dump:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: crio daemon status:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: crio daemon config:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: /etc/crio:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

>>> host: crio config:
* Profile "kubenet-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-418387"

----------------------- debugLogs end: kubenet-418387 [took: 3.10168572s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-418387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-418387
--- SKIP: TestNetworkPlugins/group/kubenet (3.26s)
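
The kubenet plugin bypasses CNI, and the containerd runtime in this job requires one, hence the skip. A hedged sketch of the equivalent manual start with an explicit CNI instead of kubenet follows; the bridge choice is an assumption, and any supported --cni value would do.

# Sketch only: containerd needs a CNI plugin, so pick one explicitly.
out/minikube-linux-amd64 start -p kubenet-418387 --driver=kvm2 --container-runtime=containerd --cni=bridge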

TestNetworkPlugins/group/cilium (3.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-418387 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-418387

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-418387

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-418387

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-418387

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-418387

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-418387

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-418387

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-418387

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-418387

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-418387

>>> host: /etc/nsswitch.conf:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /etc/hosts:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /etc/resolv.conf:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-418387

>>> host: crictl pods:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: crictl containers:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> k8s: describe netcat deployment:
error: context "cilium-418387" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-418387" does not exist

>>> k8s: netcat logs:
error: context "cilium-418387" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-418387" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-418387" does not exist

>>> k8s: coredns logs:
error: context "cilium-418387" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-418387" does not exist

>>> k8s: api server logs:
error: context "cilium-418387" does not exist

>>> host: /etc/cni:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: ip a s:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: ip r s:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: iptables-save:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: iptables table nat:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-418387

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-418387

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-418387" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-418387" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-418387

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-418387

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-418387" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-418387" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-418387" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-418387" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-418387" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: kubelet daemon config:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> k8s: kubelet logs:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-418387

>>> host: docker daemon status:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: docker daemon config:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: docker system info:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: cri-docker daemon status:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: cri-docker daemon config:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: cri-dockerd version:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: containerd daemon status:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: containerd daemon config:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: containerd config dump:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: crio daemon status:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: crio daemon config:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: /etc/crio:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

>>> host: crio config:
* Profile "cilium-418387" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-418387"

----------------------- debugLogs end: cilium-418387 [took: 3.181494135s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-418387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-418387
--- SKIP: TestNetworkPlugins/group/cilium (3.34s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-771736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-771736
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)
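
This group only checks the virtualbox-specific --disable-driver-mounts start flag, so it is skipped under KVM. A hedged sketch of what the test would run on a virtualbox host, with the profile name taken from the cleanup lines above:

# Sketch only: exercise the flag under the virtualbox driver.
out/minikube-linux-amd64 start -p disable-driver-mounts-771736 --driver=virtualbox --disable-driver-mounts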