Test Report: KVM_Linux_containerd 18779

c20b56ce109690ce92fd9e26e987f9b16f237ff0:2024-05-01:34278

Test fail (13/325)

TestFunctional/serial/InvalidService (0.08s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-167406 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-167406 apply -f testdata/invalidsvc.yaml: exit status 1 (80.041422ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.39.209:8441/openapi/v2?timeout=32s": dial tcp 192.168.39.209:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2319: kubectl --context functional-167406 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.08s)
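Note on the failure above: the error is a TCP-level "connection refused" from the apiserver endpoint at 192.168.39.209:8441 while kubectl tried to download the OpenAPI schema for validation, so the test never got as far as exercising the deliberately invalid service manifest. A minimal Go sketch of an equivalent reachability probe is below (the address is taken from the error text above; this is an illustrative check under that assumption, not part of the test suite):

// apiserverprobe.go — illustrative sketch, not part of the minikube test suite.
// Checks whether anything is listening on the apiserver endpoint reported in the
// error above, to separate "connection refused" (nothing listening) from TLS- or
// HTTP-level failures.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "192.168.39.209:8441" // taken from the failure log; adjust for your cluster
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "apiserver endpoint %s not reachable: %v\n", addr, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("TCP connect to %s succeeded; the apiserver port is at least open\n", addr)
}

The same symptom (apiserver at 192.168.39.209:8441 not accepting connections) recurs in the next failure.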

TestFunctional/parallel/DashboardCmd (2.5s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-167406 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-167406 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-167406 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-167406 --alsologtostderr -v=1] stderr:
I0501 02:20:47.244258   29930 out.go:291] Setting OutFile to fd 1 ...
I0501 02:20:47.244415   29930 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:47.244424   29930 out.go:304] Setting ErrFile to fd 2...
I0501 02:20:47.244428   29930 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:47.244624   29930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
I0501 02:20:47.244850   29930 mustload.go:65] Loading cluster: functional-167406
I0501 02:20:47.245149   29930 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:47.245511   29930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:47.245549   29930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:47.260847   29930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43747
I0501 02:20:47.261372   29930 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:47.261956   29930 main.go:141] libmachine: Using API Version  1
I0501 02:20:47.261970   29930 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:47.262405   29930 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:47.262638   29930 main.go:141] libmachine: (functional-167406) Calling .GetState
I0501 02:20:47.264219   29930 host.go:66] Checking if "functional-167406" exists ...
I0501 02:20:47.264522   29930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:47.264561   29930 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:47.279378   29930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44241
I0501 02:20:47.279741   29930 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:47.280123   29930 main.go:141] libmachine: Using API Version  1
I0501 02:20:47.280140   29930 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:47.280442   29930 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:47.280647   29930 main.go:141] libmachine: (functional-167406) Calling .DriverName
I0501 02:20:47.280812   29930 api_server.go:166] Checking apiserver status ...
I0501 02:20:47.280872   29930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0501 02:20:47.280891   29930 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
I0501 02:20:47.283540   29930 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:47.283914   29930 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
I0501 02:20:47.283952   29930 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:47.284031   29930 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
I0501 02:20:47.284219   29930 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
I0501 02:20:47.284345   29930 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
I0501 02:20:47.284479   29930 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
W0501 02:20:47.374580   29930 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0501 02:20:47.376934   29930 out.go:177] * The control-plane node functional-167406 apiserver is not running: (state=Stopped)
I0501 02:20:47.378640   29930 out.go:177]   To start a cluster, run: "minikube start -p functional-167406"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406: exit status 2 (251.335857ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 logs -n 25: (1.660023108s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service   | functional-167406 service list                                           | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	| service   | functional-167406 service list                                           | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -o json                                                                  |                   |         |         |                     |                     |
	| service   | functional-167406 service                                                | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | --namespace=default --https                                              |                   |         |         |                     |                     |
	|           | --url hello-node                                                         |                   |         |         |                     |                     |
	| service   | functional-167406                                                        | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | service hello-node --url                                                 |                   |         |         |                     |                     |
	|           | --format={{.IP}}                                                         |                   |         |         |                     |                     |
	| service   | functional-167406 service                                                | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | hello-node --url                                                         |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdany-port205709386/001:/mount-9p       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh -- ls                                              | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh cat                                                | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | /mount-9p/test-1714530043905066418                                       |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh mount |                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | grep 9p; ls -la /mount-9p; cat                                           |                   |         |         |                     |                     |
	|           | /mount-9p/pod-dates                                                      |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh sudo                                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdspecific-port3491307736/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| start     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh -- ls                                              | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| start     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | --dry-run --alsologtostderr                                              |                   |         |         |                     |                     |
	|           | -v=1 --driver=kvm2                                                       |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -p functional-167406                                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh sudo                                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:20:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:20:47.108515   29902 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:20:47.108738   29902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:20:47.108746   29902 out.go:304] Setting ErrFile to fd 2...
	I0501 02:20:47.108749   29902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:20:47.108928   29902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:20:47.109434   29902 out.go:298] Setting JSON to false
	I0501 02:20:47.110356   29902 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3789,"bootTime":1714526258,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:20:47.110449   29902 start.go:139] virtualization: kvm guest
	I0501 02:20:47.112460   29902 out.go:177] * [functional-167406] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:20:47.113759   29902 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:20:47.113783   29902 notify.go:220] Checking for updates...
	I0501 02:20:47.114988   29902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:20:47.116265   29902 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:20:47.117748   29902 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 02:20:47.119176   29902 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:20:47.120838   29902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:20:47.122715   29902 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:20:47.123148   29902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:47.123186   29902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:47.138008   29902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I0501 02:20:47.138368   29902 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:47.138892   29902 main.go:141] libmachine: Using API Version  1
	I0501 02:20:47.138915   29902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:47.139252   29902 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:47.139411   29902 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:47.139669   29902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:20:47.140000   29902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:47.140035   29902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:47.153963   29902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0501 02:20:47.154335   29902 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:47.154739   29902 main.go:141] libmachine: Using API Version  1
	I0501 02:20:47.154758   29902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:47.155018   29902 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:47.155295   29902 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:47.187947   29902 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 02:20:47.189302   29902 start.go:297] selected driver: kvm2
	I0501 02:20:47.189333   29902 start.go:901] validating driver "kvm2" against &{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:20:47.189474   29902 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:20:47.190836   29902 cni.go:84] Creating CNI manager for ""
	I0501 02:20:47.190860   29902 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:20:47.190940   29902 start.go:340] cluster config:
	{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:20:47.192616   29902 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ae6f4e38ab4f3       6e38f40d628db       39 seconds ago       Running             storage-provisioner       4                   d3f41e0f975da       storage-provisioner
	ef9868f7ee3c3       cbb01a7bd410d       54 seconds ago       Running             coredns                   2                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
	b8e78e9b1aa3a       6e38f40d628db       54 seconds ago       Exited              storage-provisioner       3                   d3f41e0f975da       storage-provisioner
	350765a60a825       c7aad43836fa5       57 seconds ago       Running             kube-controller-manager   2                   fec06a36743b8       kube-controller-manager-functional-167406
	a513f3286b775       259c8277fcbbc       About a minute ago   Running             kube-scheduler            2                   a3c933aaaf5a9       kube-scheduler-functional-167406
	3b377dde86d26       3861cfcd7c04c       About a minute ago   Running             etcd                      2                   bdca39c10acda       etcd-functional-167406
	6df6abb34b88d       a0bf559e280cf       About a minute ago   Running             kube-proxy                2                   13168bbfbe961       kube-proxy-xbtf9
	ebe11aa9f8804       c7aad43836fa5       About a minute ago   Exited              kube-controller-manager   1                   fec06a36743b8       kube-controller-manager-functional-167406
	939e53f1e1db0       259c8277fcbbc       2 minutes ago        Exited              kube-scheduler            1                   a3c933aaaf5a9       kube-scheduler-functional-167406
	a1f43ae8da4b3       3861cfcd7c04c       2 minutes ago        Exited              etcd                      1                   bdca39c10acda       etcd-functional-167406
	f0dc76865d087       a0bf559e280cf       2 minutes ago        Exited              kube-proxy                1                   13168bbfbe961       kube-proxy-xbtf9
	5652211ff7b29       cbb01a7bd410d       2 minutes ago        Exited              coredns                   1                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
	
	
	==> containerd <==
	May 01 02:20:29 functional-167406 containerd[3593]: time="2024-05-01T02:20:29.241904733Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.351940393Z" level=info msg="RemoveImage \"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.355343732Z" level=info msg="ImageDelete event name:\"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.358514194Z" level=info msg="ImageDelete event name:\"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91\""
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.405660092Z" level=info msg="RemoveImage \"gcr.io/google-containers/addon-resizer:functional-167406\" returns successfully"
	May 01 02:20:32 functional-167406 containerd[3593]: time="2024-05-01T02:20:32.293045880Z" level=info msg="ImageCreate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:32 functional-167406 containerd[3593]: time="2024-05-01T02:20:32.300943395Z" level=info msg="ImageCreate event name:\"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:32 functional-167406 containerd[3593]: time="2024-05-01T02:20:32.301676364Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.570645100Z" level=info msg="Kill container \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.780064388Z" level=info msg="shim disconnected" id=429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.782807926Z" level=warning msg="cleaning up after shim disconnected" id=429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.783846218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.819429102Z" level=info msg="StopContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" returns successfully"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.821868417Z" level=info msg="StopPodSandbox for \"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.822023270Z" level=info msg="Container to stop \"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.822457643Z" level=info msg="Container to stop \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.830076878Z" level=info msg="RemoveContainer for \"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.839962622Z" level=info msg="RemoveContainer for \"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d\" returns successfully"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.878876102Z" level=info msg="shim disconnected" id=88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.878949793Z" level=warning msg="cleaning up after shim disconnected" id=88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.878962325Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.904351844Z" level=info msg="TearDown network for sandbox \"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871\" successfully"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.904408181Z" level=info msg="StopPodSandbox for \"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871\" returns successfully"
	May 01 02:20:41 functional-167406 containerd[3593]: time="2024-05-01T02:20:41.834529273Z" level=info msg="RemoveContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\""
	May 01 02:20:41 functional-167406 containerd[3593]: time="2024-05-01T02:20:41.840826931Z" level=info msg="RemoveContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" returns successfully"
	
	
	==> coredns [5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43474 - 46251 "HINFO IN 6093638740258044659.1554125567718258750. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008772047s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: unknown (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: unknown (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51551 - 12396 "HINFO IN 7161565364375486857.4859467522399385342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006762819s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.076735] systemd-fstab-generator[2180]: Ignoring "noauto" option for root device
	[  +0.169403] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.211042] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.165983] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.323845] systemd-fstab-generator[2247]: Ignoring "noauto" option for root device
	[  +2.137091] systemd-fstab-generator[2452]: Ignoring "noauto" option for root device
	[  +0.094208] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.831325] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.516674] kauditd_printk_skb: 14 callbacks suppressed
	[  +1.457832] systemd-fstab-generator[3047]: Ignoring "noauto" option for root device
	[May 1 02:19] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.754628] systemd-fstab-generator[3215]: Ignoring "noauto" option for root device
	[ +14.125843] systemd-fstab-generator[3518]: Ignoring "noauto" option for root device
	[  +0.076849] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.077827] systemd-fstab-generator[3530]: Ignoring "noauto" option for root device
	[  +0.188600] systemd-fstab-generator[3544]: Ignoring "noauto" option for root device
	[  +0.171319] systemd-fstab-generator[3556]: Ignoring "noauto" option for root device
	[  +0.356766] systemd-fstab-generator[3585]: Ignoring "noauto" option for root device
	[  +1.365998] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
	[ +10.881538] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.346698] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.027943] systemd-fstab-generator[4273]: Ignoring "noauto" option for root device
	[  +4.180252] kauditd_printk_skb: 36 callbacks suppressed
	[May 1 02:20] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	[ +34.495701] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3b377dde86d267c8742b885c6b59382115c63d70d37c1823e0e1d10f97eff8b3] <==
	{"level":"info","ts":"2024-05-01T02:19:44.776714Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.77674Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.777129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b switched to configuration voters=(8441320971333687067)"}
	{"level":"info","ts":"2024-05-01T02:19:44.777351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","added-peer-id":"752598b30b66571b","added-peer-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2024-05-01T02:19:44.777547Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.777589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.781098Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T02:19:44.781692Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"752598b30b66571b","initial-advertise-peer-urls":["https://192.168.39.209:2380"],"listen-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.209:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T02:19:44.781836Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T02:19:44.782391Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.782447Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:46.149524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.152677Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:19:46.152701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.152914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.153408Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.153471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.155829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:19:46.156978Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339] <==
	{"level":"info","ts":"2024-05-01T02:18:47.383086Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:18:48.759417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.767118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.767067Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:18:48.768075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.768693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.768883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.769381Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:18:48.770832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T02:19:44.172843Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-01T02:19:44.172953Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	{"level":"warn","ts":"2024-05-01T02:19:44.173117Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.17315Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175169Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175192Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T02:19:44.175362Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"752598b30b66571b","current-leader-member-id":"752598b30b66571b"}
	{"level":"info","ts":"2024-05-01T02:19:44.178843Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179043Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179065Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	
	
	==> kernel <==
	 02:20:48 up 3 min,  0 users,  load average: 0.72, 0.48, 0.20
	Linux functional-167406 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-controller-manager [350765a60a82586dd2a69686a601b5d16ad68d05a64cd6e4d3359df1866500b5] <==
	I0501 02:20:05.561099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.912µs"
	I0501 02:20:05.565885       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 02:20:05.569741       1 shared_informer.go:320] Caches are synced for service account
	I0501 02:20:05.578368       1 shared_informer.go:320] Caches are synced for HPA
	I0501 02:20:05.580839       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:20:05.583366       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:20:05.584712       1 shared_informer.go:320] Caches are synced for GC
	I0501 02:20:05.590141       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 02:20:05.596584       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:20:05.600223       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 02:20:05.602715       1 shared_informer.go:320] Caches are synced for job
	I0501 02:20:05.605865       1 shared_informer.go:320] Caches are synced for deployment
	I0501 02:20:05.608288       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:20:05.634366       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:20:05.663770       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:20:05.752163       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:05.763685       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:06.213812       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228527       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	E0501 02:20:35.765362       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.39.209:8441/api": dial tcp 192.168.39.209:8441: connect: connection refused
	I0501 02:20:36.215716       1 garbagecollector.go:828] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.209:8441/api\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:20:45.539465       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.209:8441/api/v1/nodes/functional-167406/status\": dial tcp 192.168.39.209:8441: connect: connection refused" logger="node-lifecycle-controller" node="functional-167406"
	E0501 02:20:45.539784       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="functional-167406"
	E0501 02:20:45.539857       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.209:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused" logger="node-lifecycle-controller" node=""
	
	
	==> kube-controller-manager [ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54] <==
	I0501 02:19:13.936373       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 02:19:13.936390       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:19:13.940386       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 02:19:13.942716       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:19:13.946741       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:19:13.949349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.775495ms"
	I0501 02:19:13.950927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.553µs"
	I0501 02:19:13.969177       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 02:19:13.975817       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 02:19:13.985573       1 shared_informer.go:320] Caches are synced for TTL
	I0501 02:19:13.986878       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:19:13.991538       1 shared_informer.go:320] Caches are synced for node
	I0501 02:19:13.991869       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 02:19:13.992064       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 02:19:13.992201       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 02:19:13.992333       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 02:19:14.022008       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:19:14.035151       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 02:19:14.043403       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.068572       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.086442       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:19:14.135817       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 02:19:14.567440       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602838       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602885       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6df6abb34b88dfeaae1f93d6a23cfc1748633884bc829df09c3047477d7f424c] <==
	I0501 02:19:44.730099       1 server_linux.go:69] "Using iptables proxy"
	E0501 02:19:44.732063       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:45.813700       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:47.982154       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	I0501 02:19:53.031359       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0501 02:19:53.089991       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:19:53.090036       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:19:53.090052       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:19:53.094508       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:19:53.095319       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:19:53.095716       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:19:53.097123       1 config.go:192] "Starting service config controller"
	I0501 02:19:53.097468       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:19:53.097670       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:19:53.097907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:19:53.098658       1 config.go:319] "Starting node config controller"
	I0501 02:19:53.101299       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:19:53.198633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:53.198675       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:19:53.201407       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f] <==
	I0501 02:18:49.135475       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0501 02:18:49.135542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135935       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.209:8441: connect: connection refused"
	W0501 02:18:49.960987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.961201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.247414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.247829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.353906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.354334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.351893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.352039       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.513544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.513603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.774168       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.774360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:55.789131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:55.789541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.962943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.962985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.352087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.352161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	I0501 02:19:06.033778       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:07.236470       1 shared_informer.go:320] Caches are synced for node config
	I0501 02:19:08.934441       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3] <==
	E0501 02:18:57.123850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.195323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.195395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.309765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.309834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.470763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.470798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.772512       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.772548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.804749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.804779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.886920       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.886982       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.929219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.929386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.978490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.978527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.311728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.311770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:00.939844       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:19:00.939973       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:19:01.688744       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0501 02:19:09.088531       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 02:19:12.088779       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0501 02:19:44.107636       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a513f3286b775a1c5c742fd0ac19b8fa8a6ee5129122ad75de1496bed6278d1f] <==
	W0501 02:19:49.143896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.143978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.351289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.351443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.596848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.596882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.654875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.654916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.674532       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.674621       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.791451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.791485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.859678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.859751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.074783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.074851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.174913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.174963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.183651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.183678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.386329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.386369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:52.969018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 02:19:52.970815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 02:19:54.216441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.994944    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.995532    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.995631    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 02:20:39 functional-167406 kubelet[4280]: E0501 02:20:39.847743    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="7s"
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.827691    4280 scope.go:117] "RemoveContainer" containerID="52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d"
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.909221    4280 status_manager.go:853] "Failed to get status for pod" podUID="f9f7ede5128b64464fffeeb6b7a159f5" pod="kube-system/kube-apiserver-functional-167406" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951200    4280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-ca-certs\") pod \"f9f7ede5128b64464fffeeb6b7a159f5\" (UID: \"f9f7ede5128b64464fffeeb6b7a159f5\") "
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951330    4280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-k8s-certs\") pod \"f9f7ede5128b64464fffeeb6b7a159f5\" (UID: \"f9f7ede5128b64464fffeeb6b7a159f5\") "
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951353    4280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-usr-share-ca-certificates\") pod \"f9f7ede5128b64464fffeeb6b7a159f5\" (UID: \"f9f7ede5128b64464fffeeb6b7a159f5\") "
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951436    4280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-usr-share-ca-certificates" (OuterVolumeSpecName: "usr-share-ca-certificates") pod "f9f7ede5128b64464fffeeb6b7a159f5" (UID: "f9f7ede5128b64464fffeeb6b7a159f5"). InnerVolumeSpecName "usr-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951512    4280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-k8s-certs" (OuterVolumeSpecName: "k8s-certs") pod "f9f7ede5128b64464fffeeb6b7a159f5" (UID: "f9f7ede5128b64464fffeeb6b7a159f5"). InnerVolumeSpecName "k8s-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951550    4280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f9f7ede5128b64464fffeeb6b7a159f5" (UID: "f9f7ede5128b64464fffeeb6b7a159f5"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.051954    4280 reconciler_common.go:289] "Volume detached for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-usr-share-ca-certificates\") on node \"functional-167406\" DevicePath \"\""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.052001    4280 reconciler_common.go:289] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-ca-certs\") on node \"functional-167406\" DevicePath \"\""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.052014    4280 reconciler_common.go:289] "Volume detached for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-k8s-certs\") on node \"functional-167406\" DevicePath \"\""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.832454    4280 scope.go:117] "RemoveContainer" containerID="429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5"
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.835971    4280 status_manager.go:853] "Failed to get status for pod" podUID="f9f7ede5128b64464fffeeb6b7a159f5" pod="kube-system/kube-apiserver-functional-167406" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:42 functional-167406 kubelet[4280]: I0501 02:20:42.602586    4280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f7ede5128b64464fffeeb6b7a159f5" path="/var/lib/kubelet/pods/f9f7ede5128b64464fffeeb6b7a159f5/volumes"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.057965    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.058888    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.059562    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.060306    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.060896    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.060989    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 02:20:46 functional-167406 kubelet[4280]: E0501 02:20:46.849404    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="7s"
	
	
	==> storage-provisioner [ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b] <==
	I0501 02:20:08.757073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 02:20:08.772588       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 02:20:08.772654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0501 02:20:12.228155       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:16.487066       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:20.083198       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:23.134350       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:26.154932       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:29.804693       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:31.962826       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:34.340046       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:36.574219       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:39.297992       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:42.535338       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:46.489612       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:49.005345       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965] <==
	I0501 02:19:54.061102       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0501 02:19:54.064135       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406: exit status 2 (288.97459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-167406" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (2.50s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (2.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 status: exit status 2 (245.345715ms)

                                                
                                                
-- stdout --
	functional-167406
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-167406 status" : exit status 2
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (237.590733ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-167406 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
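Editor's note: the -f argument above is an ordinary Go text/template rendered against minikube's status data. The sketch below is illustrative only (the Status struct and its values are copied from the -- stdout -- block above; it is not minikube's own code) and shows how that exact template, including its literal "kublet" label, expands:

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors only the fields referenced by the template above;
	// it is an assumed type for illustration, not minikube's internal one.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// Same template string as the -f argument in the test run above.
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{
			Host:       "Running",
			Kubelet:    "Running",
			APIServer:  "Stopped",
			Kubeconfig: "Configured",
		})
		// Output: host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured
	}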
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 status -o json: exit status 2 (234.985138ms)

                                                
                                                
-- stdout --
	{"Name":"functional-167406","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-167406 status -o json" : exit status 2
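Editor's note: the single JSON line above is the per-node status object printed for a one-node profile. As a minimal sketch (field names taken directly from that output; the NodeStatus type here is an assumption, not minikube's own type), it can be decoded like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// NodeStatus lists the keys seen in the -- stdout -- block above.
	type NodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"functional-167406","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st NodeStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// A check like the test's expectation would key off APIServer.
		fmt.Printf("%s: apiserver=%s\n", st.Name, st.APIServer) // functional-167406: apiserver=Stopped
	}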
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406: exit status 2 (249.018031ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 logs -n 25: (1.488022817s)
helpers_test.go:252: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-167406 ssh echo                                                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | hello                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh cat                                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/hostname                                                                   |                   |         |         |                     |                     |
	| image   | functional-167406 image load --daemon                                           | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406                        |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                               |                   |         |         |                     |                     |
	| image   | functional-167406 image ls                                                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| image   | functional-167406 image load --daemon                                           | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406                        |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                               |                   |         |         |                     |                     |
	| image   | functional-167406 image ls                                                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| image   | functional-167406 image load --daemon                                           | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406                        |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                               |                   |         |         |                     |                     |
	| image   | functional-167406 image ls                                                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| image   | functional-167406 image save                                                    | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406                        |                   |         |         |                     |                     |
	|         | /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                               |                   |         |         |                     |                     |
	| image   | functional-167406 image rm                                                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406                        |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                               |                   |         |         |                     |                     |
	| image   | functional-167406 image ls                                                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| image   | functional-167406 image load                                                    | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                               |                   |         |         |                     |                     |
	| image   | functional-167406 image ls                                                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| image   | functional-167406 image save --daemon                                           | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406                        |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                               |                   |         |         |                     |                     |
	| addons  | functional-167406 addons list                                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| addons  | functional-167406 addons list                                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | -o json                                                                         |                   |         |         |                     |                     |
	| service | functional-167406 service list                                                  | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	| service | functional-167406 service list                                                  | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | -o json                                                                         |                   |         |         |                     |                     |
	| service | functional-167406 service                                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | --namespace=default --https                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                |                   |         |         |                     |                     |
	| service | functional-167406                                                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | service hello-node --url                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                |                   |         |         |                     |                     |
	| service | functional-167406 service                                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | hello-node --url                                                                |                   |         |         |                     |                     |
	| mount   | -p functional-167406                                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdany-port205709386/001:/mount-9p              |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                          |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh findmnt                                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                          |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh findmnt                                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | -T /mount-9p | grep 9p                                                          |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh -- ls                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | -la /mount-9p                                                                   |                   |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:19:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:19:29.437826   27302 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:19:29.438165   27302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:19:29.438221   27302 out.go:304] Setting ErrFile to fd 2...
	I0501 02:19:29.438230   27302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:19:29.438701   27302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:19:29.439585   27302 out.go:298] Setting JSON to false
	I0501 02:19:29.440532   27302 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3711,"bootTime":1714526258,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:19:29.440583   27302 start.go:139] virtualization: kvm guest
	I0501 02:19:29.442564   27302 out.go:177] * [functional-167406] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:19:29.444360   27302 notify.go:220] Checking for updates...
	I0501 02:19:29.444368   27302 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:19:29.445648   27302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:19:29.447273   27302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:19:29.448681   27302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 02:19:29.449982   27302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:19:29.451239   27302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:19:29.452846   27302 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:19:29.452913   27302 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:19:29.453282   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:19:29.453328   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:19:29.467860   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0501 02:19:29.468232   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:19:29.468835   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:19:29.468843   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:19:29.469189   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:19:29.469423   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:29.500693   27302 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 02:19:29.502118   27302 start.go:297] selected driver: kvm2
	I0501 02:19:29.502122   27302 start.go:901] validating driver "kvm2" against &{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:19:29.502238   27302 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:19:29.502533   27302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:19:29.502594   27302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13407/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:19:29.516334   27302 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:19:29.516947   27302 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:19:29.516997   27302 cni.go:84] Creating CNI manager for ""
	I0501 02:19:29.517005   27302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:19:29.517051   27302 start.go:340] cluster config:
	{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:19:29.517150   27302 iso.go:125] acquiring lock: {Name:mk2f0fca3713b9e2ec58748a6d2af30df1faa5ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:19:29.518781   27302 out.go:177] * Starting "functional-167406" primary control-plane node in "functional-167406" cluster
	I0501 02:19:29.519852   27302 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0501 02:19:29.519871   27302 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0501 02:19:29.519876   27302 cache.go:56] Caching tarball of preloaded images
	I0501 02:19:29.519929   27302 preload.go:173] Found /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:19:29.519935   27302 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on containerd
	I0501 02:19:29.520013   27302 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/config.json ...
	I0501 02:19:29.520168   27302 start.go:360] acquireMachinesLock for functional-167406: {Name:mkdc802449570b9ab245fcfdfa79580f6e5fb7ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:19:29.520199   27302 start.go:364] duration metric: took 21.879µs to acquireMachinesLock for "functional-167406"
	I0501 02:19:29.520208   27302 start.go:96] Skipping create...Using existing machine configuration
	I0501 02:19:29.520211   27302 fix.go:54] fixHost starting: 
	I0501 02:19:29.520447   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:19:29.520486   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:19:29.533583   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0501 02:19:29.533931   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:19:29.534437   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:19:29.534450   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:19:29.534783   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:19:29.534968   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:29.535081   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:19:29.536552   27302 fix.go:112] recreateIfNeeded on functional-167406: state=Running err=<nil>
	W0501 02:19:29.536561   27302 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 02:19:29.538271   27302 out.go:177] * Updating the running kvm2 "functional-167406" VM ...
	I0501 02:19:29.539520   27302 machine.go:94] provisionDockerMachine start ...
	I0501 02:19:29.539539   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:29.539733   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.541923   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.542256   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.542296   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.542428   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:29.542582   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.542731   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.542827   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:29.542960   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:29.543168   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:29.543175   27302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:19:29.655744   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-167406
	
	I0501 02:19:29.655773   27302 main.go:141] libmachine: (functional-167406) Calling .GetMachineName
	I0501 02:19:29.655991   27302 buildroot.go:166] provisioning hostname "functional-167406"
	I0501 02:19:29.656006   27302 main.go:141] libmachine: (functional-167406) Calling .GetMachineName
	I0501 02:19:29.656190   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.658663   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.659033   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.659051   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.659173   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:29.659306   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.659396   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.659522   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:29.659654   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:29.659806   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:29.659812   27302 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-167406 && echo "functional-167406" | sudo tee /etc/hostname
	I0501 02:19:29.787678   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-167406
	
	I0501 02:19:29.787698   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.790278   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.790574   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.790592   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.790738   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:29.790915   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.791052   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.791179   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:29.791296   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:29.791539   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:29.791556   27302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-167406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-167406/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-167406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:19:29.904529   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
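
For readers reproducing this step by hand, a minimal shell sketch of the same /etc/hosts aliasing logic shown above (assumes a sudo-capable shell inside the guest; ALIAS is just a placeholder for the profile hostname):

	# Standalone version of the alias update run over SSH above:
	# rewrite an existing 127.0.1.1 entry, or append one if the alias is missing.
	ALIAS=functional-167406
	if ! grep -xq ".*\s${ALIAS}" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${ALIAS}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${ALIAS}" | sudo tee -a /etc/hosts
	  fi
	fi
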
	I0501 02:19:29.904545   27302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13407/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13407/.minikube}
	I0501 02:19:29.904577   27302 buildroot.go:174] setting up certificates
	I0501 02:19:29.904585   27302 provision.go:84] configureAuth start
	I0501 02:19:29.904595   27302 main.go:141] libmachine: (functional-167406) Calling .GetMachineName
	I0501 02:19:29.904823   27302 main.go:141] libmachine: (functional-167406) Calling .GetIP
	I0501 02:19:29.907376   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.907737   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.907764   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.907905   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.910052   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.910361   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.910376   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.910493   27302 provision.go:143] copyHostCerts
	I0501 02:19:29.910529   27302 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13407/.minikube/ca.pem, removing ...
	I0501 02:19:29.910534   27302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13407/.minikube/ca.pem
	I0501 02:19:29.910593   27302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13407/.minikube/ca.pem (1078 bytes)
	I0501 02:19:29.910685   27302 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13407/.minikube/cert.pem, removing ...
	I0501 02:19:29.910689   27302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13407/.minikube/cert.pem
	I0501 02:19:29.910711   27302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13407/.minikube/cert.pem (1123 bytes)
	I0501 02:19:29.910767   27302 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13407/.minikube/key.pem, removing ...
	I0501 02:19:29.910770   27302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13407/.minikube/key.pem
	I0501 02:19:29.910790   27302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13407/.minikube/key.pem (1675 bytes)
	I0501 02:19:29.910856   27302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13407/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca-key.pem org=jenkins.functional-167406 san=[127.0.0.1 192.168.39.209 functional-167406 localhost minikube]
	I0501 02:19:30.193847   27302 provision.go:177] copyRemoteCerts
	I0501 02:19:30.193886   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:19:30.193910   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.196409   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.196720   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.196739   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.196903   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.197084   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.197230   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.197366   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.287862   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:19:30.315195   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 02:19:30.343749   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:19:30.370719   27302 provision.go:87] duration metric: took 466.124066ms to configureAuth
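
The certificates copied by configureAuth can be sanity-checked from inside the VM; a small sketch (assumes openssl is available in the guest, paths as written by the scp steps above):

	# Confirm the provisioned server certificate chains to the copied CA
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# Inspect the SANs baked into the server cert (the IP/hostnames listed in the log)
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
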
	I0501 02:19:30.370742   27302 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:19:30.370956   27302 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:19:30.370964   27302 machine.go:97] duration metric: took 831.438029ms to provisionDockerMachine
	I0501 02:19:30.370973   27302 start.go:293] postStartSetup for "functional-167406" (driver="kvm2")
	I0501 02:19:30.370984   27302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:19:30.371006   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.371291   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:19:30.371313   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.373948   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.374280   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.374299   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.374374   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.374561   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.374711   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.374838   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.466722   27302 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:19:30.471556   27302 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:19:30.471568   27302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13407/.minikube/addons for local assets ...
	I0501 02:19:30.471626   27302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13407/.minikube/files for local assets ...
	I0501 02:19:30.471697   27302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem -> 207852.pem in /etc/ssl/certs
	I0501 02:19:30.471754   27302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/test/nested/copy/20785/hosts -> hosts in /etc/test/nested/copy/20785
	I0501 02:19:30.471794   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/20785
	I0501 02:19:30.483601   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem --> /etc/ssl/certs/207852.pem (1708 bytes)
	I0501 02:19:30.512365   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/test/nested/copy/20785/hosts --> /etc/test/nested/copy/20785/hosts (40 bytes)
	I0501 02:19:30.540651   27302 start.go:296] duration metric: took 169.667782ms for postStartSetup
	I0501 02:19:30.540676   27302 fix.go:56] duration metric: took 1.020464256s for fixHost
	I0501 02:19:30.540691   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.543228   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.543544   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.543565   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.543669   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.543818   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.543982   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.544097   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.544279   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:30.544432   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:30.544436   27302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0501 02:19:30.656481   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714529970.633363586
	
	I0501 02:19:30.656494   27302 fix.go:216] guest clock: 1714529970.633363586
	I0501 02:19:30.656502   27302 fix.go:229] Guest: 2024-05-01 02:19:30.633363586 +0000 UTC Remote: 2024-05-01 02:19:30.540678287 +0000 UTC m=+1.147555627 (delta=92.685299ms)
	I0501 02:19:30.656535   27302 fix.go:200] guest clock delta is within tolerance: 92.685299ms
	I0501 02:19:30.656541   27302 start.go:83] releasing machines lock for "functional-167406", held for 1.136336978s
	I0501 02:19:30.656561   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.656802   27302 main.go:141] libmachine: (functional-167406) Calling .GetIP
	I0501 02:19:30.659387   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.659782   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.659791   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.659960   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.660461   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.660625   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.660715   27302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:19:30.660744   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.660850   27302 ssh_runner.go:195] Run: cat /version.json
	I0501 02:19:30.660866   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.663221   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.663516   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.663551   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.663568   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.663661   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.663819   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.663959   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.663959   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.663982   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.664155   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.664231   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.664287   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.664383   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.664481   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.745127   27302 ssh_runner.go:195] Run: systemctl --version
	I0501 02:19:30.768517   27302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:19:30.774488   27302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:19:30.774528   27302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:19:30.785790   27302 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 02:19:30.785800   27302 start.go:494] detecting cgroup driver to use...
	I0501 02:19:30.785853   27302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:19:30.802226   27302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:19:30.816978   27302 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:19:30.817019   27302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:19:30.831597   27302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:19:30.845771   27302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:19:30.985885   27302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:19:31.138510   27302 docker.go:233] disabling docker service ...
	I0501 02:19:31.138553   27302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:19:31.160797   27302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:19:31.182214   27302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:19:31.342922   27302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:19:31.527687   27302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:19:31.546399   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:19:31.568500   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:19:31.580338   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:19:31.601655   27302 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:19:31.601733   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:19:31.615894   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:19:31.627888   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:19:31.639148   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:19:31.650308   27302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:19:31.661624   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:19:31.672388   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:19:31.684388   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
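
A quick way to confirm the containerd rewrite took effect is to grep the keys the sed commands above touch; a sketch, run inside the guest:

	# Expected values after the rewrite (per the sed rules above):
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false        (cgroupfs driver)
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true
	sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
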
	I0501 02:19:31.696664   27302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:19:31.706404   27302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:19:31.719548   27302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:19:31.869704   27302 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0501 02:19:31.907722   27302 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0501 02:19:31.907783   27302 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0501 02:19:31.913070   27302 retry.go:31] will retry after 832.519029ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0501 02:19:32.746089   27302 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0501 02:19:32.751634   27302 start.go:562] Will wait 60s for crictl version
	I0501 02:19:32.751676   27302 ssh_runner.go:195] Run: which crictl
	I0501 02:19:32.756086   27302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:19:32.791299   27302 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.15
	RuntimeApiVersion:  v1
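
The 60-second socket wait and retry shown above can be reproduced by hand; a minimal sketch (assumes crictl on the guest's PATH and the runtime endpoint written to /etc/crictl.yaml earlier):

	# Poll for the containerd socket to come back after the restart, then query the runtime
	for i in $(seq 1 60); do
	  [ -S /run/containerd/containerd.sock ] && break
	  sleep 1
	done
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
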
	I0501 02:19:32.791343   27302 ssh_runner.go:195] Run: containerd --version
	I0501 02:19:32.818691   27302 ssh_runner.go:195] Run: containerd --version
	I0501 02:19:32.851005   27302 out.go:177] * Preparing Kubernetes v1.30.0 on containerd 1.7.15 ...
	I0501 02:19:32.852228   27302 main.go:141] libmachine: (functional-167406) Calling .GetIP
	I0501 02:19:32.854728   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:32.855035   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:32.855053   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:32.855235   27302 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:19:32.861249   27302 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0501 02:19:32.862435   27302 kubeadm.go:877] updating cluster {Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount
:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:19:32.862527   27302 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0501 02:19:32.862574   27302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:19:32.903073   27302 containerd.go:627] all images are preloaded for containerd runtime.
	I0501 02:19:32.903097   27302 containerd.go:534] Images already preloaded, skipping extraction
	I0501 02:19:32.903148   27302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:19:32.943554   27302 containerd.go:627] all images are preloaded for containerd runtime.
	I0501 02:19:32.943565   27302 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:19:32.943572   27302 kubeadm.go:928] updating node { 192.168.39.209 8441 v1.30.0 containerd true true} ...
	I0501 02:19:32.943699   27302 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-167406 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:19:32.943758   27302 ssh_runner.go:195] Run: sudo crictl info
	I0501 02:19:32.986793   27302 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0501 02:19:32.986807   27302 cni.go:84] Creating CNI manager for ""
	I0501 02:19:32.986815   27302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:19:32.986822   27302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:19:32.986839   27302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-167406 NodeName:functional-167406 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubele
tConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:19:32.986939   27302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-167406"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
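
This generated config is later written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below); a sketch of comparing it against kubeadm's stock defaults, assuming the kubeadm binary path the log reports:

	# Print kubeadm's stock InitConfiguration/ClusterConfiguration for comparison
	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config print init-defaults > /tmp/init-defaults.yaml
	diff -u /tmp/init-defaults.yaml /var/tmp/minikube/kubeadm.yaml.new || true
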
	
	I0501 02:19:32.986990   27302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:19:32.997857   27302 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:19:32.997921   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 02:19:33.010461   27302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0501 02:19:33.034391   27302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:19:33.056601   27302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2027 bytes)
	I0501 02:19:33.076127   27302 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0501 02:19:33.080439   27302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:19:33.231190   27302 ssh_runner.go:195] Run: sudo systemctl start kubelet
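
With the unit file and 10-kubeadm.conf drop-in in place, the effective kubelet command line can be inspected; a sketch using standard systemd tooling inside the guest:

	# Show the kubelet unit plus the minikube drop-in written above,
	# then confirm the service came up after the daemon-reload/start
	systemctl cat kubelet
	systemctl status kubelet --no-pager
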
	I0501 02:19:33.249506   27302 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406 for IP: 192.168.39.209
	I0501 02:19:33.249520   27302 certs.go:194] generating shared ca certs ...
	I0501 02:19:33.249539   27302 certs.go:226] acquiring lock for ca certs: {Name:mk634f0288fd77df2d93a075894d5fc692d45f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:19:33.249720   27302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13407/.minikube/ca.key
	I0501 02:19:33.249779   27302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13407/.minikube/proxy-client-ca.key
	I0501 02:19:33.249786   27302 certs.go:256] generating profile certs ...
	I0501 02:19:33.249895   27302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.key
	I0501 02:19:33.249952   27302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/apiserver.key.2355bc77
	I0501 02:19:33.249982   27302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/proxy-client.key
	I0501 02:19:33.250137   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/20785.pem (1338 bytes)
	W0501 02:19:33.250169   27302 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13407/.minikube/certs/20785_empty.pem, impossibly tiny 0 bytes
	I0501 02:19:33.250176   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:19:33.250203   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:19:33.250219   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:19:33.250238   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/key.pem (1675 bytes)
	I0501 02:19:33.250275   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem (1708 bytes)
	I0501 02:19:33.250965   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:19:33.278798   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0501 02:19:33.304380   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:19:33.331231   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:19:33.359390   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:19:33.387189   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:19:33.416881   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:19:33.444381   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:19:33.472023   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem --> /usr/share/ca-certificates/207852.pem (1708 bytes)
	I0501 02:19:33.498072   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:19:33.526348   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/certs/20785.pem --> /usr/share/ca-certificates/20785.pem (1338 bytes)
	I0501 02:19:33.554617   27302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:19:33.573705   27302 ssh_runner.go:195] Run: openssl version
	I0501 02:19:33.579942   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207852.pem && ln -fs /usr/share/ca-certificates/207852.pem /etc/ssl/certs/207852.pem"
	I0501 02:19:33.593246   27302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207852.pem
	I0501 02:19:33.598779   27302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:16 /usr/share/ca-certificates/207852.pem
	I0501 02:19:33.598808   27302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207852.pem
	I0501 02:19:33.605547   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207852.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:19:33.616599   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:19:33.629515   27302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:19:33.634618   27302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:19:33.634659   27302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:19:33.640675   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:19:33.651055   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20785.pem && ln -fs /usr/share/ca-certificates/20785.pem /etc/ssl/certs/20785.pem"
	I0501 02:19:33.664114   27302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20785.pem
	I0501 02:19:33.668922   27302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:16 /usr/share/ca-certificates/20785.pem
	I0501 02:19:33.668962   27302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20785.pem
	I0501 02:19:33.675578   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20785.pem /etc/ssl/certs/51391683.0"
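
The /etc/ssl/certs link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes with a ".0" suffix; a sketch showing how they are derived:

	# The trust-store link name is the cert's openssl subject hash plus ".0"
	for pem in /usr/share/ca-certificates/*.pem; do
	  echo "$(openssl x509 -hash -noout -in "$pem").0  <-  $pem"
	done
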
	I0501 02:19:33.686506   27302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:19:33.691387   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 02:19:33.697494   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 02:19:33.703759   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 02:19:33.710510   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 02:19:33.716307   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 02:19:33.722636   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
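
Each -checkend 86400 call above exits non-zero when the certificate expires within the next 24 hours; a compact sketch covering the same files:

	# Report any control-plane certificate that would expire within the next 24h
	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/apiserver-etcd-client.crt \
	           /var/lib/minikube/certs/front-proxy-client.crt \
	           /var/lib/minikube/certs/etcd/{server,peer,healthcheck-client}.crt; do
	  sudo openssl x509 -noout -in "$crt" -checkend 86400 || echo "expiring soon: $crt"
	done
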
	I0501 02:19:33.728495   27302 kubeadm.go:391] StartCluster: {Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:19:33.728590   27302 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0501 02:19:33.728619   27302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:19:33.769657   27302 cri.go:89] found id: "52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d"
	I0501 02:19:33.769669   27302 cri.go:89] found id: "ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54"
	I0501 02:19:33.769673   27302 cri.go:89] found id: "939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3"
	I0501 02:19:33.769676   27302 cri.go:89] found id: "a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339"
	I0501 02:19:33.769679   27302 cri.go:89] found id: "f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f"
	I0501 02:19:33.769682   27302 cri.go:89] found id: "5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75"
	I0501 02:19:33.769685   27302 cri.go:89] found id: "c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2"
	I0501 02:19:33.769688   27302 cri.go:89] found id: "281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84"
	I0501 02:19:33.769690   27302 cri.go:89] found id: "5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6"
	I0501 02:19:33.769703   27302 cri.go:89] found id: "6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e"
	I0501 02:19:33.769706   27302 cri.go:89] found id: "fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0"
	I0501 02:19:33.769709   27302 cri.go:89] found id: "09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2"
	I0501 02:19:33.769712   27302 cri.go:89] found id: "1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a"
	I0501 02:19:33.769715   27302 cri.go:89] found id: "5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb"
	I0501 02:19:33.769721   27302 cri.go:89] found id: ""
	I0501 02:19:33.769764   27302 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0501 02:19:33.796543   27302 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","pid":1600,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a/rootfs","created":"2024-05-01T02:17:54.261537993Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xbtf9_049ec84e-c877-484d-b1b1-328156fb477d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-xbtf9","io.kubernetes.cri.sand
box-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"049ec84e-c877-484d-b1b1-328156fb477d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","pid":1692,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261/rootfs","created":"2024-05-01T02:17:54.490978244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7db6d8ff4d-xv8bs_ecdc231e-5cfc-4826-9956-e1270e6e9390","io.kubernetes.cri.sandbox-memory":"178257920"
,"io.kubernetes.cri.sandbox-name":"coredns-7db6d8ff4d-xv8bs","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ecdc231e-5cfc-4826-9956-e1270e6e9390"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d","pid":3131,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d/rootfs","created":"2024-05-01T02:18:58.937492084Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.30.0","io.kubernetes.cri.sandbox-id":"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-167406","io.kubernetes.cri.sandbox-namespac
e":"kube-system","io.kubernetes.cri.sandbox-uid":"f9f7ede5128b64464fffeeb6b7a159f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75","pid":2691,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75/rootfs","created":"2024-05-01T02:18:46.237845172Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.11.1","io.kubernetes.cri.sandbox-id":"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","io.kubernetes.cri.sandbox-name":"coredns-7db6d8ff4d-xv8bs","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ecdc231e-5cfc-4826-9956-e1270e6e9390"},"owner":"root"},{"ociVers
ion":"1.0.2-dev","id":"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","pid":1045,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871/rootfs","created":"2024-05-01T02:17:34.561663829Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-167406_f9f7ede5128b64464fffeeb6b7a159f5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kuber
netes.cri.sandbox-uid":"f9f7ede5128b64464fffeeb6b7a159f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3","pid":2847,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3/rootfs","created":"2024-05-01T02:18:47.246522059Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.30.0","io.kubernetes.cri.sandbox-id":"a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"81f155d75e2d0f03623586cc74d3e9ec"},"owner":"root"},{"ociVersion":"1.0.2-dev"
,"id":"a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339","pid":2854,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339/rootfs","created":"2024-05-01T02:18:47.241651104Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri.sandbox-id":"bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd","io.kubernetes.cri.sandbox-name":"etcd-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fb03fdcce11d87d827499069eedf6b25"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","pid":1053,"status":"running","bundle":"/run/containerd/
io.containerd.runtime.v2.task/k8s.io/a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5/rootfs","created":"2024-05-01T02:17:34.593330575Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-167406_81f155d75e2d0f03623586cc74d3e9ec","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"81f155d75e2d0f03623586cc74d3e9ec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdca39c10acda1333c53e0b90122acff31f3c7
81b1a1153e1efe95bb97bb53fd","pid":1046,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd/rootfs","created":"2024-05-01T02:17:34.564975441Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-167406_fb03fdcce11d87d827499069eedf6b25","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fb03fdcce11d87d827499069eedf6b25"},"owner":"root"},{"ociV
ersion":"1.0.2-dev","id":"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2","pid":2601,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2/rootfs","created":"2024-05-01T02:18:41.150171135Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4b8999c0-090e-491d-9b39-9b6e98af676a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","pid":19
18,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed/rootfs","created":"2024-05-01T02:17:55.32966664Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4b8999c0-090e-491d-9b39-9b6e98af676a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4b8999c0-090e-491d-9b39-9b6e98af676a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebe11aa9f88
04bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54","pid":3129,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54/rootfs","created":"2024-05-01T02:18:58.939294661Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.30.0","io.kubernetes.cri.sandbox-id":"fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"adcc40a72911f3d774df393212cbb315"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f","pid":2848,"status
":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f/rootfs","created":"2024-05-01T02:18:47.226546521Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.30.0","io.kubernetes.cri.sandbox-id":"13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","io.kubernetes.cri.sandbox-name":"kube-proxy-xbtf9","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"049ec84e-c877-484d-b1b1-328156fb477d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf","pid":1033,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fec06a36743b8d1ce78158fb3e875904d2672f3d46e78
b859736a76389034aaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf/rootfs","created":"2024-05-01T02:17:34.531465578Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-167406_adcc40a72911f3d774df393212cbb315","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"adcc40a72911f3d774df393212cbb315"},"owner":"root"}]
	I0501 02:19:33.796872   27302 cri.go:126] list returned 14 containers
	I0501 02:19:33.796882   27302 cri.go:129] container: {ID:13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a Status:running}
	I0501 02:19:33.796896   27302 cri.go:131] skipping 13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a - not in ps
	I0501 02:19:33.796901   27302 cri.go:129] container: {ID:2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261 Status:running}
	I0501 02:19:33.796908   27302 cri.go:131] skipping 2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261 - not in ps
	I0501 02:19:33.796912   27302 cri.go:129] container: {ID:52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d Status:running}
	I0501 02:19:33.796920   27302 cri.go:135] skipping {52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d running}: state = "running", want "paused"
	I0501 02:19:33.796928   27302 cri.go:129] container: {ID:5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 Status:running}
	I0501 02:19:33.796935   27302 cri.go:135] skipping {5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 running}: state = "running", want "paused"
	I0501 02:19:33.796940   27302 cri.go:129] container: {ID:88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 Status:running}
	I0501 02:19:33.796948   27302 cri.go:131] skipping 88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 - not in ps
	I0501 02:19:33.796952   27302 cri.go:129] container: {ID:939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 Status:running}
	I0501 02:19:33.796959   27302 cri.go:135] skipping {939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 running}: state = "running", want "paused"
	I0501 02:19:33.796964   27302 cri.go:129] container: {ID:a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 Status:running}
	I0501 02:19:33.796968   27302 cri.go:135] skipping {a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 running}: state = "running", want "paused"
	I0501 02:19:33.796971   27302 cri.go:129] container: {ID:a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5 Status:running}
	I0501 02:19:33.796974   27302 cri.go:131] skipping a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5 - not in ps
	I0501 02:19:33.796976   27302 cri.go:129] container: {ID:bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd Status:running}
	I0501 02:19:33.796979   27302 cri.go:131] skipping bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd - not in ps
	I0501 02:19:33.796981   27302 cri.go:129] container: {ID:c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 Status:running}
	I0501 02:19:33.796985   27302 cri.go:135] skipping {c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 running}: state = "running", want "paused"
	I0501 02:19:33.796987   27302 cri.go:129] container: {ID:d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed Status:running}
	I0501 02:19:33.796991   27302 cri.go:131] skipping d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed - not in ps
	I0501 02:19:33.796993   27302 cri.go:129] container: {ID:ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 Status:running}
	I0501 02:19:33.796996   27302 cri.go:135] skipping {ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 running}: state = "running", want "paused"
	I0501 02:19:33.796999   27302 cri.go:129] container: {ID:f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f Status:running}
	I0501 02:19:33.797002   27302 cri.go:135] skipping {f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f running}: state = "running", want "paused"
	I0501 02:19:33.797010   27302 cri.go:129] container: {ID:fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf Status:running}
	I0501 02:19:33.797014   27302 cri.go:131] skipping fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf - not in ps
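
The skip decisions above come from matching each container returned by `runc list` against the IDs that crictl reported for kube-system and against the requested state ("paused" here), which is why every running container is skipped. A small illustrative sketch of that filter (the sample ID is a placeholder, not one from this run):

package main

import "fmt"

// container mirrors the {ID, Status} pairs printed by cri.go above.
type container struct {
	ID     string
	Status string
}

// filterContainers keeps only containers that crictl reported for the
// kube-system namespace (inPs) and whose state matches want.
func filterContainers(all []container, inPs map[string]bool, want string) []string {
	var keep []string
	for _, c := range all {
		if !inPs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping %s: state = %q, want %q\n", c.ID, c.Status, want)
			continue
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	// Placeholder data for illustration only.
	all := []container{{ID: "example-sandbox-id", Status: "running"}}
	fmt.Println(filterContainers(all, map[string]bool{"example-sandbox-id": true}, "paused"))
}
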
	I0501 02:19:33.797056   27302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 02:19:33.809208   27302 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 02:19:33.809215   27302 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 02:19:33.809218   27302 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 02:19:33.809251   27302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 02:19:33.820117   27302 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:19:33.820734   27302 kubeconfig.go:125] found "functional-167406" server: "https://192.168.39.209:8441"
	I0501 02:19:33.822281   27302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 02:19:33.833529   27302 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
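
Drift detection here is a `diff -u` of the live kubeadm.yaml against the newly rendered kubeadm.yaml.new; any difference, such as the changed enable-admission-plugins value above, triggers a control-plane reconfiguration. A minimal local sketch of that comparison using the same file paths:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// driftDetected returns the unified diff between the two files, or an
// empty string when they are identical (mirroring `sudo diff -u a b`).
func driftDetected(current, rendered string) (string, error) {
	var out bytes.Buffer
	cmd := exec.Command("diff", "-u", current, rendered)
	cmd.Stdout = &out
	err := cmd.Run()
	if err == nil {
		return "", nil // exit 0: files identical, no drift
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return out.String(), nil // exit 1: files differ
	}
	return "", err // exit >1: diff itself failed
}

func main() {
	diff, err := driftDetected("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if diff == "" {
		fmt.Println("no kubeadm config drift")
		return
	}
	fmt.Println("detected kubeadm config drift:")
	fmt.Print(diff)
}
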
	I0501 02:19:33.833560   27302 kubeadm.go:1154] stopping kube-system containers ...
	I0501 02:19:33.833570   27302 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0501 02:19:33.833602   27302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:19:33.876099   27302 cri.go:89] found id: "52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d"
	I0501 02:19:33.876109   27302 cri.go:89] found id: "ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54"
	I0501 02:19:33.876112   27302 cri.go:89] found id: "939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3"
	I0501 02:19:33.876114   27302 cri.go:89] found id: "a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339"
	I0501 02:19:33.876121   27302 cri.go:89] found id: "f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f"
	I0501 02:19:33.876123   27302 cri.go:89] found id: "5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75"
	I0501 02:19:33.876125   27302 cri.go:89] found id: "c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2"
	I0501 02:19:33.876126   27302 cri.go:89] found id: "281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84"
	I0501 02:19:33.876128   27302 cri.go:89] found id: "5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6"
	I0501 02:19:33.876132   27302 cri.go:89] found id: "6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e"
	I0501 02:19:33.876133   27302 cri.go:89] found id: "fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0"
	I0501 02:19:33.876135   27302 cri.go:89] found id: "09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2"
	I0501 02:19:33.876137   27302 cri.go:89] found id: "1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a"
	I0501 02:19:33.876138   27302 cri.go:89] found id: "5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb"
	I0501 02:19:33.876143   27302 cri.go:89] found id: ""
	I0501 02:19:33.876147   27302 cri.go:234] Stopping containers: [52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb]
	I0501 02:19:33.876187   27302 ssh_runner.go:195] Run: which crictl
	I0501 02:19:33.880970   27302 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb
	I0501 02:19:49.400461   27302 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb: (15.5
19438939s)
	W0501 02:19:49.400521   27302 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e
2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb: Process exited with status 1
	stdout:
	52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d
	ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54
	939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3
	a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339
	f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f
	5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75
	
	stderr:
	E0501 02:19:49.373801    3825 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2\": not found" containerID="c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2"
	time="2024-05-01T02:19:49Z" level=fatal msg="stopping the container \"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2\": not found"
	I0501 02:19:49.400576   27302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 02:19:49.442489   27302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:19:49.453682   27302 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May  1 02:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May  1 02:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May  1 02:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 May  1 02:18 /etc/kubernetes/scheduler.conf
	
	I0501 02:19:49.453722   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0501 02:19:49.463450   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0501 02:19:49.473268   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0501 02:19:49.482593   27302 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:19:49.482620   27302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:19:49.492406   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0501 02:19:49.501589   27302 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:19:49.501621   27302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:19:49.511385   27302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:19:49.521299   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:49.576852   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.275401   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.501617   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.586395   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
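
With the certificates and kubeconfigs regenerated, the restart replays the individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config rather than running a full init. A condensed sketch of that sequence, using the binary directory and config path from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// runInitPhases replays the kubeadm init phases in the order used above,
// against the freshly written kubeadm.yaml.
func runInitPhases(kubeadmCfg, binDir string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", kubeadmCfg)
		cmd := exec.Command(filepath.Join(binDir, "kubeadm"), args...)
		// kubeadm shells out to other pinned binaries, so extend PATH too,
		// mirroring the `env PATH=...` wrapper in the logged commands.
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml", "/var/lib/minikube/binaries/v1.30.0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
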
	I0501 02:19:50.669276   27302 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:19:50.669347   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:19:51.169802   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:19:51.670333   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:19:51.687969   27302 api_server.go:72] duration metric: took 1.018693775s to wait for apiserver process to appear ...
	I0501 02:19:51.687984   27302 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:19:51.688003   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:52.986291   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 02:19:52.986313   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 02:19:52.986323   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:53.043640   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:53.043668   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:53.188909   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:53.193215   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:53.193230   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:53.688857   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:53.693916   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:53.693934   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:54.188678   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:54.205628   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:54.205654   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:54.688294   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:54.692103   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 200:
	ok
	I0501 02:19:54.698200   27302 api_server.go:141] control plane version: v1.30.0
	I0501 02:19:54.698212   27302 api_server.go:131] duration metric: took 3.010224858s to wait for apiserver health ...
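
The 403 and 500 responses above are expected while the freshly started apiserver finishes its post-start hooks and RBAC bootstrap; the loop simply re-polls /healthz roughly every half second until it returns 200 "ok". A rough sketch of such a poll, assuming an anonymous HTTPS probe that skips certificate verification (an assumption for this example, not something stated in the log):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200, or the deadline
// passes. TLS verification is skipped in this sketch because the
// apiserver serves a cluster-internal certificate.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.209:8441/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
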
	I0501 02:19:54.698218   27302 cni.go:84] Creating CNI manager for ""
	I0501 02:19:54.698223   27302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:19:54.699989   27302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 02:19:54.701380   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 02:19:54.716172   27302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 02:19:54.741211   27302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:19:54.757093   27302 system_pods.go:59] 7 kube-system pods found
	I0501 02:19:54.757117   27302 system_pods.go:61] "coredns-7db6d8ff4d-xv8bs" [ecdc231e-5cfc-4826-9956-e1270e6e9390] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 02:19:54.757122   27302 system_pods.go:61] "etcd-functional-167406" [c756611c-5955-4eb6-9e66-555a18726767] Running
	I0501 02:19:54.757130   27302 system_pods.go:61] "kube-apiserver-functional-167406" [4cd1e668-c6c5-42d0-8eff-11d1e7a37cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 02:19:54.757141   27302 system_pods.go:61] "kube-controller-manager-functional-167406" [753f721a-d8f9-4aae-a8e5-42e47750f595] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 02:19:54.757148   27302 system_pods.go:61] "kube-proxy-xbtf9" [049ec84e-c877-484d-b1b1-328156fb477d] Running
	I0501 02:19:54.757156   27302 system_pods.go:61] "kube-scheduler-functional-167406" [d249cb29-5a87-45f6-90fa-4b962d7394b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 02:19:54.757162   27302 system_pods.go:61] "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:19:54.757168   27302 system_pods.go:74] duration metric: took 15.946257ms to wait for pod list to return data ...
	I0501 02:19:54.757176   27302 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:19:54.760302   27302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:19:54.760318   27302 node_conditions.go:123] node cpu capacity is 2
	I0501 02:19:54.760328   27302 node_conditions.go:105] duration metric: took 3.147862ms to run NodePressure ...
	I0501 02:19:54.760346   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:55.029033   27302 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 02:19:55.034633   27302 kubeadm.go:733] kubelet initialised
	I0501 02:19:55.034651   27302 kubeadm.go:734] duration metric: took 5.595558ms waiting for restarted kubelet to initialise ...
	I0501 02:19:55.034659   27302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:19:55.045035   27302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:19:57.051415   27302 pod_ready.go:102] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"False"
	I0501 02:19:59.054146   27302 pod_ready.go:102] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"False"
	I0501 02:20:01.552035   27302 pod_ready.go:102] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"False"
	I0501 02:20:03.052650   27302 pod_ready.go:92] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:03.052662   27302 pod_ready.go:81] duration metric: took 8.007609985s for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:03.052668   27302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:03.058012   27302 pod_ready.go:92] pod "etcd-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:03.058023   27302 pod_ready.go:81] duration metric: took 5.349333ms for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:03.058033   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:05.064872   27302 pod_ready.go:102] pod "kube-apiserver-functional-167406" in "kube-system" namespace has status "Ready":"False"
	I0501 02:20:05.565939   27302 pod_ready.go:92] pod "kube-apiserver-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:05.565953   27302 pod_ready.go:81] duration metric: took 2.507911806s for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:05.565964   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.072548   27302 pod_ready.go:92] pod "kube-controller-manager-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.072562   27302 pod_ready.go:81] duration metric: took 506.587642ms for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.072570   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.077468   27302 pod_ready.go:92] pod "kube-proxy-xbtf9" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.077475   27302 pod_ready.go:81] duration metric: took 4.901001ms for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.077482   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.082661   27302 pod_ready.go:92] pod "kube-scheduler-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.082667   27302 pod_ready.go:81] duration metric: took 5.180679ms for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.082673   27302 pod_ready.go:38] duration metric: took 11.048005881s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
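
The readiness waits above poll each system-critical pod until its Ready condition turns True, with a 4-minute cap per pod. A condensed client-go sketch of that wait for one of the pods from this run (the 2-second poll interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18779-13407/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	name := "coredns-7db6d8ff4d-xv8bs" // one of the pods waited on above
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Printf("timed out waiting for %q\n", name)
			return
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}
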
	I0501 02:20:06.082686   27302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:20:06.096020   27302 ops.go:34] apiserver oom_adj: -16
	I0501 02:20:06.096030   27302 kubeadm.go:591] duration metric: took 32.286806378s to restartPrimaryControlPlane
	I0501 02:20:06.096037   27302 kubeadm.go:393] duration metric: took 32.367551096s to StartCluster
	I0501 02:20:06.096053   27302 settings.go:142] acquiring lock: {Name:mk5412669f58875b6a0bd1d6a1dcb2e935592f4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:20:06.096132   27302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:20:06.096736   27302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13407/kubeconfig: {Name:mk4670d16c1b854bc97e144ac00ddd58ecc61c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:20:06.096929   27302 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0501 02:20:06.098607   27302 out.go:177] * Verifying Kubernetes components...
	I0501 02:20:06.097009   27302 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:20:06.098632   27302 addons.go:69] Setting storage-provisioner=true in profile "functional-167406"
	I0501 02:20:06.099827   27302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:20:06.099852   27302 addons.go:234] Setting addon storage-provisioner=true in "functional-167406"
	W0501 02:20:06.099860   27302 addons.go:243] addon storage-provisioner should already be in state true
	I0501 02:20:06.099881   27302 host.go:66] Checking if "functional-167406" exists ...
	I0501 02:20:06.097108   27302 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:20:06.098644   27302 addons.go:69] Setting default-storageclass=true in profile "functional-167406"
	I0501 02:20:06.099986   27302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-167406"
	I0501 02:20:06.100179   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.100220   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.100306   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.100341   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.114376   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0501 02:20:06.114748   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.115211   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.115227   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.115351   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0501 02:20:06.115569   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.115713   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.115765   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:20:06.116239   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.116255   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.116544   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.117096   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.117132   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.118355   27302 addons.go:234] Setting addon default-storageclass=true in "functional-167406"
	W0501 02:20:06.118363   27302 addons.go:243] addon default-storageclass should already be in state true
	I0501 02:20:06.118386   27302 host.go:66] Checking if "functional-167406" exists ...
	I0501 02:20:06.118724   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.118757   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.132056   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0501 02:20:06.132367   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.132796   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.132824   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.133092   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.133652   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.133687   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.135199   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0501 02:20:06.135589   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.136121   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.136138   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.136403   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.136599   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:20:06.138120   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:06.140321   27302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:20:06.141799   27302 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:20:06.141809   27302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:20:06.141830   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:20:06.144487   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.144874   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:20:06.144901   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.145049   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:20:06.145233   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:20:06.145425   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:20:06.145550   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:20:06.148575   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0501 02:20:06.148910   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.149344   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.149353   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.149639   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.149825   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:20:06.151057   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:06.151309   27302 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:20:06.151318   27302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:20:06.151332   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:20:06.153814   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.154212   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:20:06.154230   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.154354   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:20:06.154522   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:20:06.154665   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:20:06.154784   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:20:06.291969   27302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:20:06.310462   27302 node_ready.go:35] waiting up to 6m0s for node "functional-167406" to be "Ready" ...
	I0501 02:20:06.314577   27302 node_ready.go:49] node "functional-167406" has status "Ready":"True"
	I0501 02:20:06.314587   27302 node_ready.go:38] duration metric: took 4.105122ms for node "functional-167406" to be "Ready" ...
	I0501 02:20:06.314595   27302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:20:06.320143   27302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.392851   27302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:20:06.403455   27302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:20:06.650181   27302 pod_ready.go:92] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.650195   27302 pod_ready.go:81] duration metric: took 330.040348ms for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.650206   27302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.049853   27302 pod_ready.go:92] pod "etcd-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:07.049864   27302 pod_ready.go:81] duration metric: took 399.652977ms for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.049873   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.068039   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068053   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.068102   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068112   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.068321   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.068325   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.068330   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.068335   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.068343   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068345   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068350   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.068352   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.069878   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.069888   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.069896   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.069905   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.070002   27302 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
	I0501 02:20:07.070009   27302 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
	I0501 02:20:07.079813   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.079823   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.080103   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.080112   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.082343   27302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:20:07.083683   27302 addons.go:505] duration metric: took 986.687248ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 02:20:07.449877   27302 pod_ready.go:92] pod "kube-apiserver-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:07.449897   27302 pod_ready.go:81] duration metric: took 400.018258ms for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.449908   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.849418   27302 pod_ready.go:92] pod "kube-controller-manager-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:07.849429   27302 pod_ready.go:81] duration metric: took 399.514247ms for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.849437   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.249116   27302 pod_ready.go:92] pod "kube-proxy-xbtf9" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:08.249126   27302 pod_ready.go:81] duration metric: took 399.68419ms for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.249134   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.662879   27302 pod_ready.go:92] pod "kube-scheduler-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:08.662889   27302 pod_ready.go:81] duration metric: took 413.749499ms for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.662897   27302 pod_ready.go:38] duration metric: took 2.348293104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:20:08.662908   27302 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:20:08.662954   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:20:08.693543   27302 api_server.go:72] duration metric: took 2.596595813s to wait for apiserver process to appear ...
	I0501 02:20:08.693556   27302 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:20:08.693579   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:20:08.712207   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 200:
	ok
	I0501 02:20:08.713171   27302 api_server.go:141] control plane version: v1.30.0
	I0501 02:20:08.713188   27302 api_server.go:131] duration metric: took 19.62622ms to wait for apiserver health ...
	I0501 02:20:08.713196   27302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:20:08.853696   27302 system_pods.go:59] 7 kube-system pods found
	I0501 02:20:08.853712   27302 system_pods.go:61] "coredns-7db6d8ff4d-xv8bs" [ecdc231e-5cfc-4826-9956-e1270e6e9390] Running
	I0501 02:20:08.853718   27302 system_pods.go:61] "etcd-functional-167406" [c756611c-5955-4eb6-9e66-555a18726767] Running
	I0501 02:20:08.853722   27302 system_pods.go:61] "kube-apiserver-functional-167406" [4cd1e668-c6c5-42d0-8eff-11d1e7a37cb5] Running
	I0501 02:20:08.853726   27302 system_pods.go:61] "kube-controller-manager-functional-167406" [753f721a-d8f9-4aae-a8e5-42e47750f595] Running
	I0501 02:20:08.853730   27302 system_pods.go:61] "kube-proxy-xbtf9" [049ec84e-c877-484d-b1b1-328156fb477d] Running
	I0501 02:20:08.853732   27302 system_pods.go:61] "kube-scheduler-functional-167406" [d249cb29-5a87-45f6-90fa-4b962d7394b6] Running
	I0501 02:20:08.853736   27302 system_pods.go:61] "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:20:08.853743   27302 system_pods.go:74] duration metric: took 140.541233ms to wait for pod list to return data ...
	I0501 02:20:08.853752   27302 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:20:09.049668   27302 default_sa.go:45] found service account: "default"
	I0501 02:20:09.049681   27302 default_sa.go:55] duration metric: took 195.92317ms for default service account to be created ...
	I0501 02:20:09.049690   27302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:20:09.255439   27302 system_pods.go:86] 7 kube-system pods found
	I0501 02:20:09.255454   27302 system_pods.go:89] "coredns-7db6d8ff4d-xv8bs" [ecdc231e-5cfc-4826-9956-e1270e6e9390] Running
	I0501 02:20:09.255460   27302 system_pods.go:89] "etcd-functional-167406" [c756611c-5955-4eb6-9e66-555a18726767] Running
	I0501 02:20:09.255466   27302 system_pods.go:89] "kube-apiserver-functional-167406" [4cd1e668-c6c5-42d0-8eff-11d1e7a37cb5] Running
	I0501 02:20:09.255471   27302 system_pods.go:89] "kube-controller-manager-functional-167406" [753f721a-d8f9-4aae-a8e5-42e47750f595] Running
	I0501 02:20:09.255475   27302 system_pods.go:89] "kube-proxy-xbtf9" [049ec84e-c877-484d-b1b1-328156fb477d] Running
	I0501 02:20:09.255478   27302 system_pods.go:89] "kube-scheduler-functional-167406" [d249cb29-5a87-45f6-90fa-4b962d7394b6] Running
	I0501 02:20:09.255485   27302 system_pods.go:89] "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:20:09.255492   27302 system_pods.go:126] duration metric: took 205.797561ms to wait for k8s-apps to be running ...
	I0501 02:20:09.255501   27302 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:20:09.255557   27302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:20:09.275685   27302 system_svc.go:56] duration metric: took 20.175711ms WaitForService to wait for kubelet
	I0501 02:20:09.275704   27302 kubeadm.go:576] duration metric: took 3.178756744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:20:09.275720   27302 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:20:09.449853   27302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:20:09.449866   27302 node_conditions.go:123] node cpu capacity is 2
	I0501 02:20:09.449874   27302 node_conditions.go:105] duration metric: took 174.150822ms to run NodePressure ...
	I0501 02:20:09.449883   27302 start.go:240] waiting for startup goroutines ...
	I0501 02:20:09.449889   27302 start.go:245] waiting for cluster config update ...
	I0501 02:20:09.449897   27302 start.go:254] writing updated cluster config ...
	I0501 02:20:09.450124   27302 ssh_runner.go:195] Run: rm -f paused
	I0501 02:20:09.497259   27302 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:20:09.499251   27302 out.go:177] * Done! kubectl is now configured to use "functional-167406" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ae6f4e38ab4f3       6e38f40d628db       37 seconds ago       Running             storage-provisioner       4                   d3f41e0f975da       storage-provisioner
	ef9868f7ee3c3       cbb01a7bd410d       51 seconds ago       Running             coredns                   2                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
	b8e78e9b1aa3a       6e38f40d628db       51 seconds ago       Exited              storage-provisioner       3                   d3f41e0f975da       storage-provisioner
	350765a60a825       c7aad43836fa5       54 seconds ago       Running             kube-controller-manager   2                   fec06a36743b8       kube-controller-manager-functional-167406
	a513f3286b775       259c8277fcbbc       About a minute ago   Running             kube-scheduler            2                   a3c933aaaf5a9       kube-scheduler-functional-167406
	3b377dde86d26       3861cfcd7c04c       About a minute ago   Running             etcd                      2                   bdca39c10acda       etcd-functional-167406
	6df6abb34b88d       a0bf559e280cf       About a minute ago   Running             kube-proxy                2                   13168bbfbe961       kube-proxy-xbtf9
	ebe11aa9f8804       c7aad43836fa5       About a minute ago   Exited              kube-controller-manager   1                   fec06a36743b8       kube-controller-manager-functional-167406
	939e53f1e1db0       259c8277fcbbc       About a minute ago   Exited              kube-scheduler            1                   a3c933aaaf5a9       kube-scheduler-functional-167406
	a1f43ae8da4b3       3861cfcd7c04c       About a minute ago   Exited              etcd                      1                   bdca39c10acda       etcd-functional-167406
	f0dc76865d087       a0bf559e280cf       About a minute ago   Exited              kube-proxy                1                   13168bbfbe961       kube-proxy-xbtf9
	5652211ff7b29       cbb01a7bd410d       About a minute ago   Exited              coredns                   1                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
	
	
	==> containerd <==
	May 01 02:20:29 functional-167406 containerd[3593]: time="2024-05-01T02:20:29.241904733Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.351940393Z" level=info msg="RemoveImage \"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.355343732Z" level=info msg="ImageDelete event name:\"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.358514194Z" level=info msg="ImageDelete event name:\"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91\""
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.405660092Z" level=info msg="RemoveImage \"gcr.io/google-containers/addon-resizer:functional-167406\" returns successfully"
	May 01 02:20:32 functional-167406 containerd[3593]: time="2024-05-01T02:20:32.293045880Z" level=info msg="ImageCreate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:32 functional-167406 containerd[3593]: time="2024-05-01T02:20:32.300943395Z" level=info msg="ImageCreate event name:\"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:32 functional-167406 containerd[3593]: time="2024-05-01T02:20:32.301676364Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.570645100Z" level=info msg="Kill container \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.780064388Z" level=info msg="shim disconnected" id=429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.782807926Z" level=warning msg="cleaning up after shim disconnected" id=429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.783846218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.819429102Z" level=info msg="StopContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" returns successfully"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.821868417Z" level=info msg="StopPodSandbox for \"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.822023270Z" level=info msg="Container to stop \"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.822457643Z" level=info msg="Container to stop \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.830076878Z" level=info msg="RemoveContainer for \"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.839962622Z" level=info msg="RemoveContainer for \"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d\" returns successfully"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.878876102Z" level=info msg="shim disconnected" id=88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.878949793Z" level=warning msg="cleaning up after shim disconnected" id=88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.878962325Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.904351844Z" level=info msg="TearDown network for sandbox \"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871\" successfully"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.904408181Z" level=info msg="StopPodSandbox for \"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871\" returns successfully"
	May 01 02:20:41 functional-167406 containerd[3593]: time="2024-05-01T02:20:41.834529273Z" level=info msg="RemoveContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\""
	May 01 02:20:41 functional-167406 containerd[3593]: time="2024-05-01T02:20:41.840826931Z" level=info msg="RemoveContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" returns successfully"
	
	
	==> coredns [5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43474 - 46251 "HINFO IN 6093638740258044659.1554125567718258750. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008772047s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: unknown (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: unknown (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51551 - 12396 "HINFO IN 7161565364375486857.4859467522399385342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006762819s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.076735] systemd-fstab-generator[2180]: Ignoring "noauto" option for root device
	[  +0.169403] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.211042] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.165983] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.323845] systemd-fstab-generator[2247]: Ignoring "noauto" option for root device
	[  +2.137091] systemd-fstab-generator[2452]: Ignoring "noauto" option for root device
	[  +0.094208] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.831325] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.516674] kauditd_printk_skb: 14 callbacks suppressed
	[  +1.457832] systemd-fstab-generator[3047]: Ignoring "noauto" option for root device
	[May 1 02:19] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.754628] systemd-fstab-generator[3215]: Ignoring "noauto" option for root device
	[ +14.125843] systemd-fstab-generator[3518]: Ignoring "noauto" option for root device
	[  +0.076849] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.077827] systemd-fstab-generator[3530]: Ignoring "noauto" option for root device
	[  +0.188600] systemd-fstab-generator[3544]: Ignoring "noauto" option for root device
	[  +0.171319] systemd-fstab-generator[3556]: Ignoring "noauto" option for root device
	[  +0.356766] systemd-fstab-generator[3585]: Ignoring "noauto" option for root device
	[  +1.365998] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
	[ +10.881538] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.346698] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.027943] systemd-fstab-generator[4273]: Ignoring "noauto" option for root device
	[  +4.180252] kauditd_printk_skb: 36 callbacks suppressed
	[May 1 02:20] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	[ +34.495701] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3b377dde86d267c8742b885c6b59382115c63d70d37c1823e0e1d10f97eff8b3] <==
	{"level":"info","ts":"2024-05-01T02:19:44.776714Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.77674Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.777129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b switched to configuration voters=(8441320971333687067)"}
	{"level":"info","ts":"2024-05-01T02:19:44.777351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","added-peer-id":"752598b30b66571b","added-peer-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2024-05-01T02:19:44.777547Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.777589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.781098Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T02:19:44.781692Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"752598b30b66571b","initial-advertise-peer-urls":["https://192.168.39.209:2380"],"listen-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.209:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T02:19:44.781836Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T02:19:44.782391Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.782447Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:46.149524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.152677Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:19:46.152701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.152914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.153408Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.153471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.155829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:19:46.156978Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339] <==
	{"level":"info","ts":"2024-05-01T02:18:47.383086Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:18:48.759417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.767118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.767067Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:18:48.768075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.768693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.768883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.769381Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:18:48.770832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T02:19:44.172843Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-01T02:19:44.172953Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	{"level":"warn","ts":"2024-05-01T02:19:44.173117Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.17315Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175169Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175192Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T02:19:44.175362Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"752598b30b66571b","current-leader-member-id":"752598b30b66571b"}
	{"level":"info","ts":"2024-05-01T02:19:44.178843Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179043Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179065Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	
	
	==> kernel <==
	 02:20:45 up 3 min,  0 users,  load average: 0.78, 0.49, 0.20
	Linux functional-167406 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-controller-manager [350765a60a82586dd2a69686a601b5d16ad68d05a64cd6e4d3359df1866500b5] <==
	I0501 02:20:05.561099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.912µs"
	I0501 02:20:05.565885       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 02:20:05.569741       1 shared_informer.go:320] Caches are synced for service account
	I0501 02:20:05.578368       1 shared_informer.go:320] Caches are synced for HPA
	I0501 02:20:05.580839       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:20:05.583366       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:20:05.584712       1 shared_informer.go:320] Caches are synced for GC
	I0501 02:20:05.590141       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 02:20:05.596584       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:20:05.600223       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 02:20:05.602715       1 shared_informer.go:320] Caches are synced for job
	I0501 02:20:05.605865       1 shared_informer.go:320] Caches are synced for deployment
	I0501 02:20:05.608288       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:20:05.634366       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:20:05.663770       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:20:05.752163       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:05.763685       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:06.213812       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228527       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	E0501 02:20:35.765362       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.39.209:8441/api": dial tcp 192.168.39.209:8441: connect: connection refused
	I0501 02:20:36.215716       1 garbagecollector.go:828] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.209:8441/api\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:20:45.539465       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.209:8441/api/v1/nodes/functional-167406/status\": dial tcp 192.168.39.209:8441: connect: connection refused" logger="node-lifecycle-controller" node="functional-167406"
	E0501 02:20:45.539784       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="functional-167406"
	E0501 02:20:45.539857       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.209:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused" logger="node-lifecycle-controller" node=""
	
	
	==> kube-controller-manager [ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54] <==
	I0501 02:19:13.936373       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 02:19:13.936390       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:19:13.940386       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 02:19:13.942716       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:19:13.946741       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:19:13.949349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.775495ms"
	I0501 02:19:13.950927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.553µs"
	I0501 02:19:13.969177       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 02:19:13.975817       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 02:19:13.985573       1 shared_informer.go:320] Caches are synced for TTL
	I0501 02:19:13.986878       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:19:13.991538       1 shared_informer.go:320] Caches are synced for node
	I0501 02:19:13.991869       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 02:19:13.992064       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 02:19:13.992201       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 02:19:13.992333       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 02:19:14.022008       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:19:14.035151       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 02:19:14.043403       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.068572       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.086442       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:19:14.135817       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 02:19:14.567440       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602838       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602885       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6df6abb34b88dfeaae1f93d6a23cfc1748633884bc829df09c3047477d7f424c] <==
	I0501 02:19:44.730099       1 server_linux.go:69] "Using iptables proxy"
	E0501 02:19:44.732063       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:45.813700       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:47.982154       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	I0501 02:19:53.031359       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0501 02:19:53.089991       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:19:53.090036       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:19:53.090052       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:19:53.094508       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:19:53.095319       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:19:53.095716       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:19:53.097123       1 config.go:192] "Starting service config controller"
	I0501 02:19:53.097468       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:19:53.097670       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:19:53.097907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:19:53.098658       1 config.go:319] "Starting node config controller"
	I0501 02:19:53.101299       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:19:53.198633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:53.198675       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:19:53.201407       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f] <==
	I0501 02:18:49.135475       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0501 02:18:49.135542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135935       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.209:8441: connect: connection refused"
	W0501 02:18:49.960987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.961201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.247414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.247829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.353906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.354334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.351893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.352039       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.513544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.513603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.774168       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.774360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:55.789131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:55.789541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.962943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.962985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.352087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.352161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	I0501 02:19:06.033778       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:07.236470       1 shared_informer.go:320] Caches are synced for node config
	I0501 02:19:08.934441       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3] <==
	E0501 02:18:57.123850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.195323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.195395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.309765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.309834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.470763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.470798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.772512       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.772548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.804749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.804779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.886920       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.886982       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.929219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.929386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.978490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.978527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.311728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.311770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:00.939844       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:19:00.939973       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:19:01.688744       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0501 02:19:09.088531       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 02:19:12.088779       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0501 02:19:44.107636       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a513f3286b775a1c5c742fd0ac19b8fa8a6ee5129122ad75de1496bed6278d1f] <==
	W0501 02:19:49.143896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.143978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.351289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.351443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.596848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.596882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.654875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.654916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.674532       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.674621       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.791451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.791485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.859678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.859751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.074783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.074851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.174913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.174963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.183651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.183678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.386329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.386369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:52.969018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 02:19:52.970815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 02:19:54.216441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.994379    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.994944    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.995532    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.995631    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 02:20:39 functional-167406 kubelet[4280]: E0501 02:20:39.847743    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="7s"
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.827691    4280 scope.go:117] "RemoveContainer" containerID="52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d"
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.909221    4280 status_manager.go:853] "Failed to get status for pod" podUID="f9f7ede5128b64464fffeeb6b7a159f5" pod="kube-system/kube-apiserver-functional-167406" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951200    4280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-ca-certs\") pod \"f9f7ede5128b64464fffeeb6b7a159f5\" (UID: \"f9f7ede5128b64464fffeeb6b7a159f5\") "
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951330    4280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-k8s-certs\") pod \"f9f7ede5128b64464fffeeb6b7a159f5\" (UID: \"f9f7ede5128b64464fffeeb6b7a159f5\") "
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951353    4280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-usr-share-ca-certificates\") pod \"f9f7ede5128b64464fffeeb6b7a159f5\" (UID: \"f9f7ede5128b64464fffeeb6b7a159f5\") "
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951436    4280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-usr-share-ca-certificates" (OuterVolumeSpecName: "usr-share-ca-certificates") pod "f9f7ede5128b64464fffeeb6b7a159f5" (UID: "f9f7ede5128b64464fffeeb6b7a159f5"). InnerVolumeSpecName "usr-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951512    4280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-k8s-certs" (OuterVolumeSpecName: "k8s-certs") pod "f9f7ede5128b64464fffeeb6b7a159f5" (UID: "f9f7ede5128b64464fffeeb6b7a159f5"). InnerVolumeSpecName "k8s-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951550    4280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f9f7ede5128b64464fffeeb6b7a159f5" (UID: "f9f7ede5128b64464fffeeb6b7a159f5"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.051954    4280 reconciler_common.go:289] "Volume detached for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-usr-share-ca-certificates\") on node \"functional-167406\" DevicePath \"\""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.052001    4280 reconciler_common.go:289] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-ca-certs\") on node \"functional-167406\" DevicePath \"\""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.052014    4280 reconciler_common.go:289] "Volume detached for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-k8s-certs\") on node \"functional-167406\" DevicePath \"\""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.832454    4280 scope.go:117] "RemoveContainer" containerID="429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5"
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.835971    4280 status_manager.go:853] "Failed to get status for pod" podUID="f9f7ede5128b64464fffeeb6b7a159f5" pod="kube-system/kube-apiserver-functional-167406" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:42 functional-167406 kubelet[4280]: I0501 02:20:42.602586    4280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f7ede5128b64464fffeeb6b7a159f5" path="/var/lib/kubelet/pods/f9f7ede5128b64464fffeeb6b7a159f5/volumes"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.057965    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.058888    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.059562    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.060306    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.060896    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.060989    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	
	
	==> storage-provisioner [ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b] <==
	I0501 02:20:08.757073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 02:20:08.772588       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 02:20:08.772654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0501 02:20:12.228155       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:16.487066       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:20.083198       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:23.134350       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:26.154932       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:29.804693       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:31.962826       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:34.340046       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:36.574219       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:39.297992       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:42.535338       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965] <==
	I0501 02:19:54.061102       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0501 02:19:54.064135       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406: exit status 2 (246.390728ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-167406" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (2.75s)
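Every failure in this report shares the same underlying symptom: the apiserver at 192.168.39.209:8441 refuses connections even though the host reports Running, the kubelet log above shows the kube-apiserver-functional-167406 pod being torn down, and the container status dump later in this report lists no running kube-apiserver container. A minimal triage sketch for this situation, assuming the functional-167406 profile and its kubeconfig context from this run are still available (these commands are illustrative and are not part of the recorded test output):

	# Ask minikube whether it believes the apiserver is up for this profile
	out/minikube-linux-amd64 status -p functional-167406
	# Probe the apiserver health endpoint directly through the recorded kubeconfig context
	kubectl --context functional-167406 get --raw /readyz
	# Re-dump the most recent cluster logs, as the test harness does in its post-mortem
	out/minikube-linux-amd64 -p functional-167406 logs -n 25

If /readyz also reports connection refused while the VM is Running, that points at the kube-apiserver container rather than the VM or the network, which is consistent with the kubelet and container-status evidence above and below.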

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (16.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-167406 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1625: (dbg) Non-zero exit: kubectl --context functional-167406 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: exit status 1 (44.687534ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.39.209:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.39.209:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-167406 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-167406 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-167406 describe po hello-node-connect: exit status 1 (47.318215ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.39.209:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1600: "kubectl --context functional-167406 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-167406 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-167406 logs -l app=hello-node-connect: exit status 1 (44.884353ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.39.209:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1606: "kubectl --context functional-167406 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-167406 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-167406 describe svc hello-node-connect: exit status 1 (45.763344ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.39.209:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1612: "kubectl --context functional-167406 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406: exit status 2 (13.895331753s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 logs -n 25: (1.817557768s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service   | functional-167406 service list                                           | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	| service   | functional-167406 service list                                           | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -o json                                                                  |                   |         |         |                     |                     |
	| service   | functional-167406 service                                                | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | --namespace=default --https                                              |                   |         |         |                     |                     |
	|           | --url hello-node                                                         |                   |         |         |                     |                     |
	| service   | functional-167406                                                        | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | service hello-node --url                                                 |                   |         |         |                     |                     |
	|           | --format={{.IP}}                                                         |                   |         |         |                     |                     |
	| service   | functional-167406 service                                                | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | hello-node --url                                                         |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdany-port205709386/001:/mount-9p       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh -- ls                                              | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh cat                                                | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | /mount-9p/test-1714530043905066418                                       |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh mount |                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | grep 9p; ls -la /mount-9p; cat                                           |                   |         |         |                     |                     |
	|           | /mount-9p/pod-dates                                                      |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh sudo                                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdspecific-port3491307736/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| start     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh -- ls                                              | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| start     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=kvm2                                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | --dry-run --alsologtostderr                                              |                   |         |         |                     |                     |
	|           | -v=1 --driver=kvm2                                                       |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -p functional-167406                                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh sudo                                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-167406 ssh findmnt                                            | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-167406                                                     | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:20:47
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:20:47.108515   29902 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:20:47.108738   29902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:20:47.108746   29902 out.go:304] Setting ErrFile to fd 2...
	I0501 02:20:47.108749   29902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:20:47.108928   29902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:20:47.109434   29902 out.go:298] Setting JSON to false
	I0501 02:20:47.110356   29902 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3789,"bootTime":1714526258,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:20:47.110449   29902 start.go:139] virtualization: kvm guest
	I0501 02:20:47.112460   29902 out.go:177] * [functional-167406] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:20:47.113759   29902 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:20:47.113783   29902 notify.go:220] Checking for updates...
	I0501 02:20:47.114988   29902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:20:47.116265   29902 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:20:47.117748   29902 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 02:20:47.119176   29902 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:20:47.120838   29902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:20:47.122715   29902 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:20:47.123148   29902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:47.123186   29902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:47.138008   29902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I0501 02:20:47.138368   29902 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:47.138892   29902 main.go:141] libmachine: Using API Version  1
	I0501 02:20:47.138915   29902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:47.139252   29902 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:47.139411   29902 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:47.139669   29902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:20:47.140000   29902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:47.140035   29902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:47.153963   29902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I0501 02:20:47.154335   29902 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:47.154739   29902 main.go:141] libmachine: Using API Version  1
	I0501 02:20:47.154758   29902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:47.155018   29902 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:47.155295   29902 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:47.187947   29902 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 02:20:47.189302   29902 start.go:297] selected driver: kvm2
	I0501 02:20:47.189333   29902 start.go:901] validating driver "kvm2" against &{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:20:47.189474   29902 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:20:47.190836   29902 cni.go:84] Creating CNI manager for ""
	I0501 02:20:47.190860   29902 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:20:47.190940   29902 start.go:340] cluster config:
	{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:20:47.192616   29902 out.go:177] * dry-run validation complete!
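	For orientation, the cluster config echoed in the dry-run log above corresponds roughly to a minikube start invocation like the sketch below. This is an illustrative reconstruction from the logged config, not the command recorded for this run; the profile, driver, runtime, Kubernetes version, resources, API server port, and apiserver extra-config are read off the dump, and the flag spellings assume current minikube CLI conventions.
	
	# Hypothetical reconstruction of the start flags implied by the config dump above.
	out/minikube-linux-amd64 start -p functional-167406 \
	  --driver=kvm2 \
	  --container-runtime=containerd \
	  --kubernetes-version=v1.30.0 \
	  --memory=4000mb --cpus=2 --disk-size=20000mb \
	  --apiserver-port=8441 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision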
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ae6f4e38ab4f3       6e38f40d628db       40 seconds ago       Running             storage-provisioner       4                   d3f41e0f975da       storage-provisioner
	ef9868f7ee3c3       cbb01a7bd410d       55 seconds ago       Running             coredns                   2                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
	b8e78e9b1aa3a       6e38f40d628db       55 seconds ago       Exited              storage-provisioner       3                   d3f41e0f975da       storage-provisioner
	350765a60a825       c7aad43836fa5       58 seconds ago       Running             kube-controller-manager   2                   fec06a36743b8       kube-controller-manager-functional-167406
	a513f3286b775       259c8277fcbbc       About a minute ago   Running             kube-scheduler            2                   a3c933aaaf5a9       kube-scheduler-functional-167406
	3b377dde86d26       3861cfcd7c04c       About a minute ago   Running             etcd                      2                   bdca39c10acda       etcd-functional-167406
	6df6abb34b88d       a0bf559e280cf       About a minute ago   Running             kube-proxy                2                   13168bbfbe961       kube-proxy-xbtf9
	ebe11aa9f8804       c7aad43836fa5       About a minute ago   Exited              kube-controller-manager   1                   fec06a36743b8       kube-controller-manager-functional-167406
	939e53f1e1db0       259c8277fcbbc       2 minutes ago        Exited              kube-scheduler            1                   a3c933aaaf5a9       kube-scheduler-functional-167406
	a1f43ae8da4b3       3861cfcd7c04c       2 minutes ago        Exited              etcd                      1                   bdca39c10acda       etcd-functional-167406
	f0dc76865d087       a0bf559e280cf       2 minutes ago        Exited              kube-proxy                1                   13168bbfbe961       kube-proxy-xbtf9
	5652211ff7b29       cbb01a7bd410d       2 minutes ago        Exited              coredns                   1                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
	
	
	==> containerd <==
	May 01 02:20:29 functional-167406 containerd[3593]: time="2024-05-01T02:20:29.241904733Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.351940393Z" level=info msg="RemoveImage \"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.355343732Z" level=info msg="ImageDelete event name:\"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.358514194Z" level=info msg="ImageDelete event name:\"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91\""
	May 01 02:20:31 functional-167406 containerd[3593]: time="2024-05-01T02:20:31.405660092Z" level=info msg="RemoveImage \"gcr.io/google-containers/addon-resizer:functional-167406\" returns successfully"
	May 01 02:20:32 functional-167406 containerd[3593]: time="2024-05-01T02:20:32.293045880Z" level=info msg="ImageCreate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:32 functional-167406 containerd[3593]: time="2024-05-01T02:20:32.300943395Z" level=info msg="ImageCreate event name:\"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:32 functional-167406 containerd[3593]: time="2024-05-01T02:20:32.301676364Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.570645100Z" level=info msg="Kill container \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.780064388Z" level=info msg="shim disconnected" id=429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.782807926Z" level=warning msg="cleaning up after shim disconnected" id=429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.783846218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.819429102Z" level=info msg="StopContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" returns successfully"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.821868417Z" level=info msg="StopPodSandbox for \"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.822023270Z" level=info msg="Container to stop \"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.822457643Z" level=info msg="Container to stop \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.830076878Z" level=info msg="RemoveContainer for \"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d\""
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.839962622Z" level=info msg="RemoveContainer for \"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d\" returns successfully"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.878876102Z" level=info msg="shim disconnected" id=88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.878949793Z" level=warning msg="cleaning up after shim disconnected" id=88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.878962325Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.904351844Z" level=info msg="TearDown network for sandbox \"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871\" successfully"
	May 01 02:20:40 functional-167406 containerd[3593]: time="2024-05-01T02:20:40.904408181Z" level=info msg="StopPodSandbox for \"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871\" returns successfully"
	May 01 02:20:41 functional-167406 containerd[3593]: time="2024-05-01T02:20:41.834529273Z" level=info msg="RemoveContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\""
	May 01 02:20:41 functional-167406 containerd[3593]: time="2024-05-01T02:20:41.840826931Z" level=info msg="RemoveContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" returns successfully"
	
	
	==> coredns [5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43474 - 46251 "HINFO IN 6093638740258044659.1554125567718258750. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008772047s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: unknown (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: unknown (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51551 - 12396 "HINFO IN 7161565364375486857.4859467522399385342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006762819s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +32.076735] systemd-fstab-generator[2180]: Ignoring "noauto" option for root device
	[  +0.169403] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.211042] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.165983] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.323845] systemd-fstab-generator[2247]: Ignoring "noauto" option for root device
	[  +2.137091] systemd-fstab-generator[2452]: Ignoring "noauto" option for root device
	[  +0.094208] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.831325] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.516674] kauditd_printk_skb: 14 callbacks suppressed
	[  +1.457832] systemd-fstab-generator[3047]: Ignoring "noauto" option for root device
	[May 1 02:19] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.754628] systemd-fstab-generator[3215]: Ignoring "noauto" option for root device
	[ +14.125843] systemd-fstab-generator[3518]: Ignoring "noauto" option for root device
	[  +0.076849] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.077827] systemd-fstab-generator[3530]: Ignoring "noauto" option for root device
	[  +0.188600] systemd-fstab-generator[3544]: Ignoring "noauto" option for root device
	[  +0.171319] systemd-fstab-generator[3556]: Ignoring "noauto" option for root device
	[  +0.356766] systemd-fstab-generator[3585]: Ignoring "noauto" option for root device
	[  +1.365998] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
	[ +10.881538] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.346698] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.027943] systemd-fstab-generator[4273]: Ignoring "noauto" option for root device
	[  +4.180252] kauditd_printk_skb: 36 callbacks suppressed
	[May 1 02:20] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	[ +34.495701] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [3b377dde86d267c8742b885c6b59382115c63d70d37c1823e0e1d10f97eff8b3] <==
	{"level":"info","ts":"2024-05-01T02:19:44.776714Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.77674Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.777129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b switched to configuration voters=(8441320971333687067)"}
	{"level":"info","ts":"2024-05-01T02:19:44.777351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","added-peer-id":"752598b30b66571b","added-peer-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2024-05-01T02:19:44.777547Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.777589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.781098Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T02:19:44.781692Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"752598b30b66571b","initial-advertise-peer-urls":["https://192.168.39.209:2380"],"listen-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.209:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T02:19:44.781836Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T02:19:44.782391Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.782447Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:46.149524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.152677Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:19:46.152701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.152914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.153408Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.153471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.155829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:19:46.156978Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339] <==
	{"level":"info","ts":"2024-05-01T02:18:47.383086Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:18:48.759417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.767118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.767067Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:18:48.768075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.768693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.768883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.769381Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:18:48.770832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T02:19:44.172843Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-01T02:19:44.172953Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	{"level":"warn","ts":"2024-05-01T02:19:44.173117Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.17315Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175169Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175192Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T02:19:44.175362Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"752598b30b66571b","current-leader-member-id":"752598b30b66571b"}
	{"level":"info","ts":"2024-05-01T02:19:44.178843Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179043Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179065Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	
	
	==> kernel <==
	 02:20:49 up 3 min,  0 users,  load average: 0.72, 0.48, 0.20
	Linux functional-167406 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-controller-manager [350765a60a82586dd2a69686a601b5d16ad68d05a64cd6e4d3359df1866500b5] <==
	I0501 02:20:05.561099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.912µs"
	I0501 02:20:05.565885       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 02:20:05.569741       1 shared_informer.go:320] Caches are synced for service account
	I0501 02:20:05.578368       1 shared_informer.go:320] Caches are synced for HPA
	I0501 02:20:05.580839       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:20:05.583366       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:20:05.584712       1 shared_informer.go:320] Caches are synced for GC
	I0501 02:20:05.590141       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 02:20:05.596584       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:20:05.600223       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 02:20:05.602715       1 shared_informer.go:320] Caches are synced for job
	I0501 02:20:05.605865       1 shared_informer.go:320] Caches are synced for deployment
	I0501 02:20:05.608288       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:20:05.634366       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:20:05.663770       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:20:05.752163       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:05.763685       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:06.213812       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228527       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	E0501 02:20:35.765362       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.39.209:8441/api": dial tcp 192.168.39.209:8441: connect: connection refused
	I0501 02:20:36.215716       1 garbagecollector.go:828] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.209:8441/api\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:20:45.539465       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.209:8441/api/v1/nodes/functional-167406/status\": dial tcp 192.168.39.209:8441: connect: connection refused" logger="node-lifecycle-controller" node="functional-167406"
	E0501 02:20:45.539784       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="functional-167406"
	E0501 02:20:45.539857       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.209:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused" logger="node-lifecycle-controller" node=""
	
	
	==> kube-controller-manager [ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54] <==
	I0501 02:19:13.936373       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 02:19:13.936390       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:19:13.940386       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 02:19:13.942716       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:19:13.946741       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:19:13.949349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.775495ms"
	I0501 02:19:13.950927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.553µs"
	I0501 02:19:13.969177       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 02:19:13.975817       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 02:19:13.985573       1 shared_informer.go:320] Caches are synced for TTL
	I0501 02:19:13.986878       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:19:13.991538       1 shared_informer.go:320] Caches are synced for node
	I0501 02:19:13.991869       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 02:19:13.992064       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 02:19:13.992201       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 02:19:13.992333       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 02:19:14.022008       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:19:14.035151       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 02:19:14.043403       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.068572       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.086442       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:19:14.135817       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 02:19:14.567440       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602838       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602885       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6df6abb34b88dfeaae1f93d6a23cfc1748633884bc829df09c3047477d7f424c] <==
	I0501 02:19:44.730099       1 server_linux.go:69] "Using iptables proxy"
	E0501 02:19:44.732063       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:45.813700       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:47.982154       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	I0501 02:19:53.031359       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0501 02:19:53.089991       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:19:53.090036       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:19:53.090052       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:19:53.094508       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:19:53.095319       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:19:53.095716       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:19:53.097123       1 config.go:192] "Starting service config controller"
	I0501 02:19:53.097468       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:19:53.097670       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:19:53.097907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:19:53.098658       1 config.go:319] "Starting node config controller"
	I0501 02:19:53.101299       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:19:53.198633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:53.198675       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:19:53.201407       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f] <==
	I0501 02:18:49.135475       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0501 02:18:49.135542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135935       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.209:8441: connect: connection refused"
	W0501 02:18:49.960987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.961201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.247414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.247829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.353906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.354334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.351893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.352039       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.513544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.513603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.774168       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.774360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:55.789131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:55.789541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.962943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.962985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.352087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.352161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	I0501 02:19:06.033778       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:07.236470       1 shared_informer.go:320] Caches are synced for node config
	I0501 02:19:08.934441       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3] <==
	E0501 02:18:57.123850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.195323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.195395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.309765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.309834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.470763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.470798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.772512       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.772548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.804749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.804779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.886920       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.886982       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.929219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.929386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.978490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.978527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.311728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.311770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:00.939844       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:19:00.939973       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:19:01.688744       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0501 02:19:09.088531       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 02:19:12.088779       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0501 02:19:44.107636       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a513f3286b775a1c5c742fd0ac19b8fa8a6ee5129122ad75de1496bed6278d1f] <==
	W0501 02:19:49.143896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.143978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.351289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.351443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.596848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.596882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.654875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.654916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.674532       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.674621       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.791451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.791485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.859678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.859751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.074783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.074851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.174913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.174963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.183651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.183678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.386329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.386369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:52.969018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 02:19:52.970815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 02:19:54.216441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.994944    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.995532    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:33 functional-167406 kubelet[4280]: E0501 02:20:33.995631    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 02:20:39 functional-167406 kubelet[4280]: E0501 02:20:39.847743    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="7s"
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.827691    4280 scope.go:117] "RemoveContainer" containerID="52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d"
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.909221    4280 status_manager.go:853] "Failed to get status for pod" podUID="f9f7ede5128b64464fffeeb6b7a159f5" pod="kube-system/kube-apiserver-functional-167406" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951200    4280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-ca-certs\") pod \"f9f7ede5128b64464fffeeb6b7a159f5\" (UID: \"f9f7ede5128b64464fffeeb6b7a159f5\") "
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951330    4280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-k8s-certs\") pod \"f9f7ede5128b64464fffeeb6b7a159f5\" (UID: \"f9f7ede5128b64464fffeeb6b7a159f5\") "
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951353    4280 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-usr-share-ca-certificates\") pod \"f9f7ede5128b64464fffeeb6b7a159f5\" (UID: \"f9f7ede5128b64464fffeeb6b7a159f5\") "
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951436    4280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-usr-share-ca-certificates" (OuterVolumeSpecName: "usr-share-ca-certificates") pod "f9f7ede5128b64464fffeeb6b7a159f5" (UID: "f9f7ede5128b64464fffeeb6b7a159f5"). InnerVolumeSpecName "usr-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951512    4280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-k8s-certs" (OuterVolumeSpecName: "k8s-certs") pod "f9f7ede5128b64464fffeeb6b7a159f5" (UID: "f9f7ede5128b64464fffeeb6b7a159f5"). InnerVolumeSpecName "k8s-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:20:40 functional-167406 kubelet[4280]: I0501 02:20:40.951550    4280 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f9f7ede5128b64464fffeeb6b7a159f5" (UID: "f9f7ede5128b64464fffeeb6b7a159f5"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.051954    4280 reconciler_common.go:289] "Volume detached for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-usr-share-ca-certificates\") on node \"functional-167406\" DevicePath \"\""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.052001    4280 reconciler_common.go:289] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-ca-certs\") on node \"functional-167406\" DevicePath \"\""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.052014    4280 reconciler_common.go:289] "Volume detached for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9f7ede5128b64464fffeeb6b7a159f5-k8s-certs\") on node \"functional-167406\" DevicePath \"\""
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.832454    4280 scope.go:117] "RemoveContainer" containerID="429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5"
	May 01 02:20:41 functional-167406 kubelet[4280]: I0501 02:20:41.835971    4280 status_manager.go:853] "Failed to get status for pod" podUID="f9f7ede5128b64464fffeeb6b7a159f5" pod="kube-system/kube-apiserver-functional-167406" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:42 functional-167406 kubelet[4280]: I0501 02:20:42.602586    4280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f7ede5128b64464fffeeb6b7a159f5" path="/var/lib/kubelet/pods/f9f7ede5128b64464fffeeb6b7a159f5/volumes"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.057965    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.058888    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.059562    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.060306    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.060896    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:44 functional-167406 kubelet[4280]: E0501 02:20:44.060989    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 02:20:46 functional-167406 kubelet[4280]: E0501 02:20:46.849404    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="7s"
	
	
	==> storage-provisioner [ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b] <==
	I0501 02:20:08.757073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 02:20:08.772588       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 02:20:08.772654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0501 02:20:12.228155       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:16.487066       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:20.083198       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:23.134350       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:26.154932       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:29.804693       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:31.962826       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:34.340046       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:36.574219       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:39.297992       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:42.535338       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:46.489612       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:49.005345       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965] <==
	I0501 02:19:54.061102       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0501 02:19:54.064135       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406: exit status 2 (269.7501ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-167406" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (16.17s)
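Note: every failure above reduces to TCP dials against the apiserver endpoint 192.168.39.209:8441 (or the in-cluster 10.96.0.1:443) being refused. A minimal Go sketch of that probe, purely illustrative and not part of the test suite, with the endpoint and timeout taken from the logs above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same apiserver endpoint the failing requests above targeted;
		// a refused connection here reproduces the "connect: connection refused"
		// symptom without involving kubectl at all.
		conn, err := net.DialTimeout("tcp", "192.168.39.209:8441", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver endpoint unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver endpoint is accepting TCP connections")
	}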

                                                
                                    
x
+
TestFunctional/parallel/MySQL (29.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-167406 replace --force -f testdata/mysql.yaml
functional_test.go:1789: (dbg) Non-zero exit: kubectl --context functional-167406 replace --force -f testdata/mysql.yaml: exit status 1 (48.026273ms)

                                                
                                                
** stderr ** 
	error when deleting "testdata/mysql.yaml": Delete "https://192.168.39.209:8441/api/v1/namespaces/default/services/mysql": dial tcp 192.168.39.209:8441: connect: connection refused
	error when deleting "testdata/mysql.yaml": Delete "https://192.168.39.209:8441/apis/apps/v1/namespaces/default/deployments/mysql": dial tcp 192.168.39.209:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1791: failed to kubectl replace mysql: args "kubectl --context functional-167406 replace --force -f testdata/mysql.yaml" failed: exit status 1
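For context, the harness drives kubectl through os/exec; a minimal sketch (command copied from the failure above, simplified, not the actual functional_test.go helper) of how the same invocation surfaces exit status 1 when the apiserver is unreachable:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same kubectl invocation the test reports above; with the
		// apiserver down, CombinedOutput returns the stderr shown in the
		// report and err carries "exit status 1".
		cmd := exec.Command("kubectl", "--context", "functional-167406",
			"replace", "--force", "-f", "testdata/mysql.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("kubectl failed:", err)
		}
	}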
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406: exit status 2 (13.538607823s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 logs -n 25: (2.259095589s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| config  | functional-167406 config set                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | cpus 2                                                     |                   |         |         |                     |                     |
	| config  | functional-167406 config get                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | cpus                                                       |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo                                 | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | systemctl is-active crio                                   |                   |         |         |                     |                     |
	| config  | functional-167406 config unset                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | cpus                                                       |                   |         |         |                     |                     |
	| config  | functional-167406 config get                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | cpus                                                       |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/ssl/certs/20785.pem                                   |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /usr/share/ca-certificates/20785.pem                       |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/test/nested/copy/20785/hosts                          |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/ssl/certs/51391683.0                                  |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/ssl/certs/207852.pem                                  |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /usr/share/ca-certificates/207852.pem                      |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/ssl/certs/3ec20f2e.0                                  |                   |         |         |                     |                     |
	| cp      | functional-167406 cp                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | testdata/cp-test.txt                                       |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                   |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh -n                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | functional-167406 sudo cat                                 |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                   |                   |         |         |                     |                     |
	| cp      | functional-167406 cp                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | functional-167406:/home/docker/cp-test.txt                 |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd2548486059/001/cp-test.txt |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh -n                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | functional-167406 sudo cat                                 |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                   |                   |         |         |                     |                     |
	| cp      | functional-167406 cp                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | testdata/cp-test.txt                                       |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                            |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh -n                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | functional-167406 sudo cat                                 |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                            |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh echo                                 | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | hello                                                      |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh cat                                  | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/hostname                                              |                   |         |         |                     |                     |
	| image   | functional-167406 image load --daemon                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                          |                   |         |         |                     |                     |
	| image   | functional-167406 image ls                                 | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| image   | functional-167406 image load --daemon                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                          |                   |         |         |                     |                     |
	| image   | functional-167406 image ls                                 | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| image   | functional-167406 image load --daemon                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | gcr.io/google-containers/addon-resizer:functional-167406   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                          |                   |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:19:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:19:29.437826   27302 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:19:29.438165   27302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:19:29.438221   27302 out.go:304] Setting ErrFile to fd 2...
	I0501 02:19:29.438230   27302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:19:29.438701   27302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:19:29.439585   27302 out.go:298] Setting JSON to false
	I0501 02:19:29.440532   27302 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3711,"bootTime":1714526258,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:19:29.440583   27302 start.go:139] virtualization: kvm guest
	I0501 02:19:29.442564   27302 out.go:177] * [functional-167406] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:19:29.444360   27302 notify.go:220] Checking for updates...
	I0501 02:19:29.444368   27302 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:19:29.445648   27302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:19:29.447273   27302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:19:29.448681   27302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 02:19:29.449982   27302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:19:29.451239   27302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:19:29.452846   27302 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:19:29.452913   27302 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:19:29.453282   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:19:29.453328   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:19:29.467860   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0501 02:19:29.468232   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:19:29.468835   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:19:29.468843   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:19:29.469189   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:19:29.469423   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:29.500693   27302 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 02:19:29.502118   27302 start.go:297] selected driver: kvm2
	I0501 02:19:29.502122   27302 start.go:901] validating driver "kvm2" against &{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:19:29.502238   27302 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:19:29.502533   27302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:19:29.502594   27302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13407/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:19:29.516334   27302 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:19:29.516947   27302 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:19:29.516997   27302 cni.go:84] Creating CNI manager for ""
	I0501 02:19:29.517005   27302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:19:29.517051   27302 start.go:340] cluster config:
	{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:19:29.517150   27302 iso.go:125] acquiring lock: {Name:mk2f0fca3713b9e2ec58748a6d2af30df1faa5ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:19:29.518781   27302 out.go:177] * Starting "functional-167406" primary control-plane node in "functional-167406" cluster
	I0501 02:19:29.519852   27302 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0501 02:19:29.519871   27302 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0501 02:19:29.519876   27302 cache.go:56] Caching tarball of preloaded images
	I0501 02:19:29.519929   27302 preload.go:173] Found /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:19:29.519935   27302 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on containerd
	I0501 02:19:29.520013   27302 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/config.json ...
	I0501 02:19:29.520168   27302 start.go:360] acquireMachinesLock for functional-167406: {Name:mkdc802449570b9ab245fcfdfa79580f6e5fb7ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:19:29.520199   27302 start.go:364] duration metric: took 21.879µs to acquireMachinesLock for "functional-167406"
	I0501 02:19:29.520208   27302 start.go:96] Skipping create...Using existing machine configuration
	I0501 02:19:29.520211   27302 fix.go:54] fixHost starting: 
	I0501 02:19:29.520447   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:19:29.520486   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:19:29.533583   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0501 02:19:29.533931   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:19:29.534437   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:19:29.534450   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:19:29.534783   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:19:29.534968   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:29.535081   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:19:29.536552   27302 fix.go:112] recreateIfNeeded on functional-167406: state=Running err=<nil>
	W0501 02:19:29.536561   27302 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 02:19:29.538271   27302 out.go:177] * Updating the running kvm2 "functional-167406" VM ...
	I0501 02:19:29.539520   27302 machine.go:94] provisionDockerMachine start ...
	I0501 02:19:29.539539   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:29.539733   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.541923   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.542256   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.542296   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.542428   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:29.542582   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.542731   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.542827   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:29.542960   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:29.543168   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:29.543175   27302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:19:29.655744   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-167406
	
	I0501 02:19:29.655773   27302 main.go:141] libmachine: (functional-167406) Calling .GetMachineName
	I0501 02:19:29.655991   27302 buildroot.go:166] provisioning hostname "functional-167406"
	I0501 02:19:29.656006   27302 main.go:141] libmachine: (functional-167406) Calling .GetMachineName
	I0501 02:19:29.656190   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.658663   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.659033   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.659051   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.659173   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:29.659306   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.659396   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.659522   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:29.659654   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:29.659806   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:29.659812   27302 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-167406 && echo "functional-167406" | sudo tee /etc/hostname
	I0501 02:19:29.787678   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-167406
	
	I0501 02:19:29.787698   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.790278   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.790574   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.790592   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.790738   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:29.790915   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.791052   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.791179   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:29.791296   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:29.791539   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:29.791556   27302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-167406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-167406/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-167406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:19:29.904529   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:19:29.904545   27302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13407/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13407/.minikube}
	I0501 02:19:29.904577   27302 buildroot.go:174] setting up certificates
	I0501 02:19:29.904585   27302 provision.go:84] configureAuth start
	I0501 02:19:29.904595   27302 main.go:141] libmachine: (functional-167406) Calling .GetMachineName
	I0501 02:19:29.904823   27302 main.go:141] libmachine: (functional-167406) Calling .GetIP
	I0501 02:19:29.907376   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.907737   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.907764   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.907905   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.910052   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.910361   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.910376   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.910493   27302 provision.go:143] copyHostCerts
	I0501 02:19:29.910529   27302 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13407/.minikube/ca.pem, removing ...
	I0501 02:19:29.910534   27302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13407/.minikube/ca.pem
	I0501 02:19:29.910593   27302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13407/.minikube/ca.pem (1078 bytes)
	I0501 02:19:29.910685   27302 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13407/.minikube/cert.pem, removing ...
	I0501 02:19:29.910689   27302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13407/.minikube/cert.pem
	I0501 02:19:29.910711   27302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13407/.minikube/cert.pem (1123 bytes)
	I0501 02:19:29.910767   27302 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13407/.minikube/key.pem, removing ...
	I0501 02:19:29.910770   27302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13407/.minikube/key.pem
	I0501 02:19:29.910790   27302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13407/.minikube/key.pem (1675 bytes)
	I0501 02:19:29.910856   27302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13407/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca-key.pem org=jenkins.functional-167406 san=[127.0.0.1 192.168.39.209 functional-167406 localhost minikube]
	I0501 02:19:30.193847   27302 provision.go:177] copyRemoteCerts
	I0501 02:19:30.193886   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:19:30.193910   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.196409   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.196720   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.196739   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.196903   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.197084   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.197230   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.197366   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.287862   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:19:30.315195   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 02:19:30.343749   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:19:30.370719   27302 provision.go:87] duration metric: took 466.124066ms to configureAuth
	I0501 02:19:30.370742   27302 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:19:30.370956   27302 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:19:30.370964   27302 machine.go:97] duration metric: took 831.438029ms to provisionDockerMachine
	I0501 02:19:30.370973   27302 start.go:293] postStartSetup for "functional-167406" (driver="kvm2")
	I0501 02:19:30.370984   27302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:19:30.371006   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.371291   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:19:30.371313   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.373948   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.374280   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.374299   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.374374   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.374561   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.374711   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.374838   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.466722   27302 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:19:30.471556   27302 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:19:30.471568   27302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13407/.minikube/addons for local assets ...
	I0501 02:19:30.471626   27302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13407/.minikube/files for local assets ...
	I0501 02:19:30.471697   27302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem -> 207852.pem in /etc/ssl/certs
	I0501 02:19:30.471754   27302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/test/nested/copy/20785/hosts -> hosts in /etc/test/nested/copy/20785
	I0501 02:19:30.471794   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/20785
	I0501 02:19:30.483601   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem --> /etc/ssl/certs/207852.pem (1708 bytes)
	I0501 02:19:30.512365   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/test/nested/copy/20785/hosts --> /etc/test/nested/copy/20785/hosts (40 bytes)
	I0501 02:19:30.540651   27302 start.go:296] duration metric: took 169.667782ms for postStartSetup
	I0501 02:19:30.540676   27302 fix.go:56] duration metric: took 1.020464256s for fixHost
	I0501 02:19:30.540691   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.543228   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.543544   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.543565   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.543669   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.543818   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.543982   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.544097   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.544279   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:30.544432   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:30.544436   27302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:19:30.656481   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714529970.633363586
	
	I0501 02:19:30.656494   27302 fix.go:216] guest clock: 1714529970.633363586
	I0501 02:19:30.656502   27302 fix.go:229] Guest: 2024-05-01 02:19:30.633363586 +0000 UTC Remote: 2024-05-01 02:19:30.540678287 +0000 UTC m=+1.147555627 (delta=92.685299ms)
	I0501 02:19:30.656535   27302 fix.go:200] guest clock delta is within tolerance: 92.685299ms
	I0501 02:19:30.656541   27302 start.go:83] releasing machines lock for "functional-167406", held for 1.136336978s
	I0501 02:19:30.656561   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.656802   27302 main.go:141] libmachine: (functional-167406) Calling .GetIP
	I0501 02:19:30.659387   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.659782   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.659791   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.659960   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.660461   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.660625   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.660715   27302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:19:30.660744   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.660850   27302 ssh_runner.go:195] Run: cat /version.json
	I0501 02:19:30.660866   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.663221   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.663516   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.663551   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.663568   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.663661   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.663819   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.663959   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.663959   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.663982   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.664155   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.664231   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.664287   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.664383   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.664481   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.745127   27302 ssh_runner.go:195] Run: systemctl --version
	I0501 02:19:30.768517   27302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:19:30.774488   27302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:19:30.774528   27302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:19:30.785790   27302 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 02:19:30.785800   27302 start.go:494] detecting cgroup driver to use...
	I0501 02:19:30.785853   27302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:19:30.802226   27302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:19:30.816978   27302 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:19:30.817019   27302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:19:30.831597   27302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:19:30.845771   27302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:19:30.985885   27302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:19:31.138510   27302 docker.go:233] disabling docker service ...
	I0501 02:19:31.138553   27302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:19:31.160797   27302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:19:31.182214   27302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:19:31.342922   27302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:19:31.527687   27302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:19:31.546399   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:19:31.568500   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:19:31.580338   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:19:31.601655   27302 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:19:31.601733   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:19:31.615894   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:19:31.627888   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:19:31.639148   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:19:31.650308   27302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:19:31.661624   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:19:31.672388   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:19:31.684388   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:19:31.696664   27302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:19:31.706404   27302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:19:31.719548   27302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:19:31.869704   27302 ssh_runner.go:195] Run: sudo systemctl restart containerd
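The containerd reconfiguration above is a series of in-place sed edits on /etc/containerd/config.toml followed by a daemon-reload and restart. As a sketch only, the SystemdCgroup edit could be expressed in Go over an in-memory string (minikube itself shells out to sed as shown; the sample input is illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setCgroupfs mirrors the sed edit above: force SystemdCgroup = false so
    // containerd uses the cgroupfs driver expected by this kubelet config.
    func setCgroupfs(configTOML string) string {
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
    	in := "  SystemdCgroup = true\n"
    	fmt.Print(setCgroupfs(in)) // "  SystemdCgroup = false"
    }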
	I0501 02:19:31.907722   27302 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0501 02:19:31.907783   27302 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0501 02:19:31.913070   27302 retry.go:31] will retry after 832.519029ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0501 02:19:32.746089   27302 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
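The 60s socket wait above is a plain poll-with-retry: stat the containerd socket and try again after a short delay until it appears or a deadline passes. A hedged Go sketch of that pattern (the interval and error wording are illustrative, not lifted from minikube source):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for path until it exists or the deadline passes.
    func waitForSocket(path string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second)
    	fmt.Println(err)
    }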
	I0501 02:19:32.751634   27302 start.go:562] Will wait 60s for crictl version
	I0501 02:19:32.751676   27302 ssh_runner.go:195] Run: which crictl
	I0501 02:19:32.756086   27302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:19:32.791299   27302 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.15
	RuntimeApiVersion:  v1
	I0501 02:19:32.791343   27302 ssh_runner.go:195] Run: containerd --version
	I0501 02:19:32.818691   27302 ssh_runner.go:195] Run: containerd --version
	I0501 02:19:32.851005   27302 out.go:177] * Preparing Kubernetes v1.30.0 on containerd 1.7.15 ...
	I0501 02:19:32.852228   27302 main.go:141] libmachine: (functional-167406) Calling .GetIP
	I0501 02:19:32.854728   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:32.855035   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:32.855053   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:32.855235   27302 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:19:32.861249   27302 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0501 02:19:32.862435   27302 kubeadm.go:877] updating cluster {Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount
:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:19:32.862527   27302 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0501 02:19:32.862574   27302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:19:32.903073   27302 containerd.go:627] all images are preloaded for containerd runtime.
	I0501 02:19:32.903097   27302 containerd.go:534] Images already preloaded, skipping extraction
	I0501 02:19:32.903148   27302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:19:32.943554   27302 containerd.go:627] all images are preloaded for containerd runtime.
	I0501 02:19:32.943565   27302 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:19:32.943572   27302 kubeadm.go:928] updating node { 192.168.39.209 8441 v1.30.0 containerd true true} ...
	I0501 02:19:32.943699   27302 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-167406 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:19:32.943758   27302 ssh_runner.go:195] Run: sudo crictl info
	I0501 02:19:32.986793   27302 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0501 02:19:32.986807   27302 cni.go:84] Creating CNI manager for ""
	I0501 02:19:32.986815   27302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:19:32.986822   27302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:19:32.986839   27302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-167406 NodeName:functional-167406 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubele
tConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:19:32.986939   27302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-167406"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 02:19:32.986990   27302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:19:32.997857   27302 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:19:32.997921   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 02:19:33.010461   27302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0501 02:19:33.034391   27302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:19:33.056601   27302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2027 bytes)
	I0501 02:19:33.076127   27302 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0501 02:19:33.080439   27302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:19:33.231190   27302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:19:33.249506   27302 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406 for IP: 192.168.39.209
	I0501 02:19:33.249520   27302 certs.go:194] generating shared ca certs ...
	I0501 02:19:33.249539   27302 certs.go:226] acquiring lock for ca certs: {Name:mk634f0288fd77df2d93a075894d5fc692d45f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:19:33.249720   27302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13407/.minikube/ca.key
	I0501 02:19:33.249779   27302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13407/.minikube/proxy-client-ca.key
	I0501 02:19:33.249786   27302 certs.go:256] generating profile certs ...
	I0501 02:19:33.249895   27302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.key
	I0501 02:19:33.249952   27302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/apiserver.key.2355bc77
	I0501 02:19:33.249982   27302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/proxy-client.key
	I0501 02:19:33.250137   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/20785.pem (1338 bytes)
	W0501 02:19:33.250169   27302 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13407/.minikube/certs/20785_empty.pem, impossibly tiny 0 bytes
	I0501 02:19:33.250176   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:19:33.250203   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:19:33.250219   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:19:33.250238   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/key.pem (1675 bytes)
	I0501 02:19:33.250275   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem (1708 bytes)
	I0501 02:19:33.250965   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:19:33.278798   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0501 02:19:33.304380   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:19:33.331231   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:19:33.359390   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:19:33.387189   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:19:33.416881   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:19:33.444381   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:19:33.472023   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem --> /usr/share/ca-certificates/207852.pem (1708 bytes)
	I0501 02:19:33.498072   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:19:33.526348   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/certs/20785.pem --> /usr/share/ca-certificates/20785.pem (1338 bytes)
	I0501 02:19:33.554617   27302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:19:33.573705   27302 ssh_runner.go:195] Run: openssl version
	I0501 02:19:33.579942   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207852.pem && ln -fs /usr/share/ca-certificates/207852.pem /etc/ssl/certs/207852.pem"
	I0501 02:19:33.593246   27302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207852.pem
	I0501 02:19:33.598779   27302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:16 /usr/share/ca-certificates/207852.pem
	I0501 02:19:33.598808   27302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207852.pem
	I0501 02:19:33.605547   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207852.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:19:33.616599   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:19:33.629515   27302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:19:33.634618   27302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:19:33.634659   27302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:19:33.640675   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:19:33.651055   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20785.pem && ln -fs /usr/share/ca-certificates/20785.pem /etc/ssl/certs/20785.pem"
	I0501 02:19:33.664114   27302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20785.pem
	I0501 02:19:33.668922   27302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:16 /usr/share/ca-certificates/20785.pem
	I0501 02:19:33.668962   27302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20785.pem
	I0501 02:19:33.675578   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20785.pem /etc/ssl/certs/51391683.0"
	I0501 02:19:33.686506   27302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:19:33.691387   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 02:19:33.697494   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 02:19:33.703759   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 02:19:33.710510   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 02:19:33.716307   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 02:19:33.722636   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
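Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be sketched with Go's crypto/x509 (the path in main is simply the first certificate checked in the log; this is not minikube's implementation):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin is the Go equivalent of `openssl x509 -checkend <seconds>`:
    // true if the certificate's NotAfter falls inside the given window.
    func expiresWithin(certPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }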
	I0501 02:19:33.728495   27302 kubeadm.go:391] StartCluster: {Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:19:33.728590   27302 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0501 02:19:33.728619   27302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:19:33.769657   27302 cri.go:89] found id: "52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d"
	I0501 02:19:33.769669   27302 cri.go:89] found id: "ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54"
	I0501 02:19:33.769673   27302 cri.go:89] found id: "939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3"
	I0501 02:19:33.769676   27302 cri.go:89] found id: "a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339"
	I0501 02:19:33.769679   27302 cri.go:89] found id: "f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f"
	I0501 02:19:33.769682   27302 cri.go:89] found id: "5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75"
	I0501 02:19:33.769685   27302 cri.go:89] found id: "c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2"
	I0501 02:19:33.769688   27302 cri.go:89] found id: "281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84"
	I0501 02:19:33.769690   27302 cri.go:89] found id: "5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6"
	I0501 02:19:33.769703   27302 cri.go:89] found id: "6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e"
	I0501 02:19:33.769706   27302 cri.go:89] found id: "fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0"
	I0501 02:19:33.769709   27302 cri.go:89] found id: "09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2"
	I0501 02:19:33.769712   27302 cri.go:89] found id: "1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a"
	I0501 02:19:33.769715   27302 cri.go:89] found id: "5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb"
	I0501 02:19:33.769721   27302 cri.go:89] found id: ""
	I0501 02:19:33.769764   27302 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0501 02:19:33.796543   27302 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","pid":1600,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a/rootfs","created":"2024-05-01T02:17:54.261537993Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xbtf9_049ec84e-c877-484d-b1b1-328156fb477d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-xbtf9","io.kubernetes.cri.sand
box-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"049ec84e-c877-484d-b1b1-328156fb477d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","pid":1692,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261/rootfs","created":"2024-05-01T02:17:54.490978244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7db6d8ff4d-xv8bs_ecdc231e-5cfc-4826-9956-e1270e6e9390","io.kubernetes.cri.sandbox-memory":"178257920"
,"io.kubernetes.cri.sandbox-name":"coredns-7db6d8ff4d-xv8bs","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ecdc231e-5cfc-4826-9956-e1270e6e9390"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d","pid":3131,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d/rootfs","created":"2024-05-01T02:18:58.937492084Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.30.0","io.kubernetes.cri.sandbox-id":"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-167406","io.kubernetes.cri.sandbox-namespac
e":"kube-system","io.kubernetes.cri.sandbox-uid":"f9f7ede5128b64464fffeeb6b7a159f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75","pid":2691,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75/rootfs","created":"2024-05-01T02:18:46.237845172Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.11.1","io.kubernetes.cri.sandbox-id":"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","io.kubernetes.cri.sandbox-name":"coredns-7db6d8ff4d-xv8bs","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ecdc231e-5cfc-4826-9956-e1270e6e9390"},"owner":"root"},{"ociVers
ion":"1.0.2-dev","id":"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","pid":1045,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871/rootfs","created":"2024-05-01T02:17:34.561663829Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-167406_f9f7ede5128b64464fffeeb6b7a159f5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kuber
netes.cri.sandbox-uid":"f9f7ede5128b64464fffeeb6b7a159f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3","pid":2847,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3/rootfs","created":"2024-05-01T02:18:47.246522059Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.30.0","io.kubernetes.cri.sandbox-id":"a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"81f155d75e2d0f03623586cc74d3e9ec"},"owner":"root"},{"ociVersion":"1.0.2-dev"
,"id":"a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339","pid":2854,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339/rootfs","created":"2024-05-01T02:18:47.241651104Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri.sandbox-id":"bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd","io.kubernetes.cri.sandbox-name":"etcd-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fb03fdcce11d87d827499069eedf6b25"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","pid":1053,"status":"running","bundle":"/run/containerd/
io.containerd.runtime.v2.task/k8s.io/a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5/rootfs","created":"2024-05-01T02:17:34.593330575Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-167406_81f155d75e2d0f03623586cc74d3e9ec","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"81f155d75e2d0f03623586cc74d3e9ec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdca39c10acda1333c53e0b90122acff31f3c7
81b1a1153e1efe95bb97bb53fd","pid":1046,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd/rootfs","created":"2024-05-01T02:17:34.564975441Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-167406_fb03fdcce11d87d827499069eedf6b25","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fb03fdcce11d87d827499069eedf6b25"},"owner":"root"},{"ociV
ersion":"1.0.2-dev","id":"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2","pid":2601,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2/rootfs","created":"2024-05-01T02:18:41.150171135Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4b8999c0-090e-491d-9b39-9b6e98af676a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","pid":19
18,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed/rootfs","created":"2024-05-01T02:17:55.32966664Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4b8999c0-090e-491d-9b39-9b6e98af676a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4b8999c0-090e-491d-9b39-9b6e98af676a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebe11aa9f88
04bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54","pid":3129,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54/rootfs","created":"2024-05-01T02:18:58.939294661Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.30.0","io.kubernetes.cri.sandbox-id":"fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"adcc40a72911f3d774df393212cbb315"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f","pid":2848,"status
":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f/rootfs","created":"2024-05-01T02:18:47.226546521Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.30.0","io.kubernetes.cri.sandbox-id":"13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","io.kubernetes.cri.sandbox-name":"kube-proxy-xbtf9","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"049ec84e-c877-484d-b1b1-328156fb477d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf","pid":1033,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fec06a36743b8d1ce78158fb3e875904d2672f3d46e78
b859736a76389034aaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf/rootfs","created":"2024-05-01T02:17:34.531465578Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-167406_adcc40a72911f3d774df393212cbb315","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"adcc40a72911f3d774df393212cbb315"},"owner":"root"}]
	I0501 02:19:33.796872   27302 cri.go:126] list returned 14 containers
	I0501 02:19:33.796882   27302 cri.go:129] container: {ID:13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a Status:running}
	I0501 02:19:33.796896   27302 cri.go:131] skipping 13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a - not in ps
	I0501 02:19:33.796901   27302 cri.go:129] container: {ID:2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261 Status:running}
	I0501 02:19:33.796908   27302 cri.go:131] skipping 2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261 - not in ps
	I0501 02:19:33.796912   27302 cri.go:129] container: {ID:52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d Status:running}
	I0501 02:19:33.796920   27302 cri.go:135] skipping {52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d running}: state = "running", want "paused"
	I0501 02:19:33.796928   27302 cri.go:129] container: {ID:5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 Status:running}
	I0501 02:19:33.796935   27302 cri.go:135] skipping {5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 running}: state = "running", want "paused"
	I0501 02:19:33.796940   27302 cri.go:129] container: {ID:88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 Status:running}
	I0501 02:19:33.796948   27302 cri.go:131] skipping 88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 - not in ps
	I0501 02:19:33.796952   27302 cri.go:129] container: {ID:939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 Status:running}
	I0501 02:19:33.796959   27302 cri.go:135] skipping {939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 running}: state = "running", want "paused"
	I0501 02:19:33.796964   27302 cri.go:129] container: {ID:a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 Status:running}
	I0501 02:19:33.796968   27302 cri.go:135] skipping {a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 running}: state = "running", want "paused"
	I0501 02:19:33.796971   27302 cri.go:129] container: {ID:a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5 Status:running}
	I0501 02:19:33.796974   27302 cri.go:131] skipping a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5 - not in ps
	I0501 02:19:33.796976   27302 cri.go:129] container: {ID:bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd Status:running}
	I0501 02:19:33.796979   27302 cri.go:131] skipping bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd - not in ps
	I0501 02:19:33.796981   27302 cri.go:129] container: {ID:c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 Status:running}
	I0501 02:19:33.796985   27302 cri.go:135] skipping {c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 running}: state = "running", want "paused"
	I0501 02:19:33.796987   27302 cri.go:129] container: {ID:d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed Status:running}
	I0501 02:19:33.796991   27302 cri.go:131] skipping d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed - not in ps
	I0501 02:19:33.796993   27302 cri.go:129] container: {ID:ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 Status:running}
	I0501 02:19:33.796996   27302 cri.go:135] skipping {ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 running}: state = "running", want "paused"
	I0501 02:19:33.796999   27302 cri.go:129] container: {ID:f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f Status:running}
	I0501 02:19:33.797002   27302 cri.go:135] skipping {f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f running}: state = "running", want "paused"
	I0501 02:19:33.797010   27302 cri.go:129] container: {ID:fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf Status:running}
	I0501 02:19:33.797014   27302 cri.go:131] skipping fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf - not in ps
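The container selection above parses `runc list -f json` and keeps only IDs that crictl also reported and whose runc status matches the wanted state ("paused" here, so every running container is skipped). A small Go sketch of that filter, under the assumption that id and status are the only fields needed:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type runcContainer struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    // selectContainers keeps containers that crictl reported (inPs) and whose
    // runc status matches the wanted state.
    func selectContainers(runcJSON []byte, inPs map[string]bool, want string) ([]string, error) {
    	var all []runcContainer
    	if err := json.Unmarshal(runcJSON, &all); err != nil {
    		return nil, err
    	}
    	var keep []string
    	for _, c := range all {
    		if !inPs[c.ID] {
    			continue // "not in ps"
    		}
    		if c.Status != want {
    			continue // e.g. state = "running", want "paused"
    		}
    		keep = append(keep, c.ID)
    	}
    	return keep, nil
    }

    func main() {
    	sample := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
    	ids, err := selectContainers(sample, map[string]bool{"abc": true, "def": true}, "paused")
    	fmt.Println(ids, err) // [def] <nil>
    }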
	I0501 02:19:33.797056   27302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 02:19:33.809208   27302 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 02:19:33.809215   27302 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 02:19:33.809218   27302 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 02:19:33.809251   27302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 02:19:33.820117   27302 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:19:33.820734   27302 kubeconfig.go:125] found "functional-167406" server: "https://192.168.39.209:8441"
	I0501 02:19:33.822281   27302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 02:19:33.833529   27302 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
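Drift detection here is simply `diff -u` between the current and freshly rendered kubeadm.yaml, with exit status 1 meaning the files differ. A hedged Go sketch of the same decision (paths taken from the log, error handling simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrift runs `diff -u old new`: exit status 0 means identical,
    // 1 means the files differ (drift), anything else is an error.
    func configDrift(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drift, err)
    	if drift {
    		fmt.Print(diff)
    	}
    }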
	I0501 02:19:33.833560   27302 kubeadm.go:1154] stopping kube-system containers ...
	I0501 02:19:33.833570   27302 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0501 02:19:33.833602   27302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:19:33.876099   27302 cri.go:89] found id: "52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d"
	I0501 02:19:33.876109   27302 cri.go:89] found id: "ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54"
	I0501 02:19:33.876112   27302 cri.go:89] found id: "939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3"
	I0501 02:19:33.876114   27302 cri.go:89] found id: "a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339"
	I0501 02:19:33.876121   27302 cri.go:89] found id: "f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f"
	I0501 02:19:33.876123   27302 cri.go:89] found id: "5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75"
	I0501 02:19:33.876125   27302 cri.go:89] found id: "c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2"
	I0501 02:19:33.876126   27302 cri.go:89] found id: "281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84"
	I0501 02:19:33.876128   27302 cri.go:89] found id: "5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6"
	I0501 02:19:33.876132   27302 cri.go:89] found id: "6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e"
	I0501 02:19:33.876133   27302 cri.go:89] found id: "fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0"
	I0501 02:19:33.876135   27302 cri.go:89] found id: "09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2"
	I0501 02:19:33.876137   27302 cri.go:89] found id: "1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a"
	I0501 02:19:33.876138   27302 cri.go:89] found id: "5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb"
	I0501 02:19:33.876143   27302 cri.go:89] found id: ""
	I0501 02:19:33.876147   27302 cri.go:234] Stopping containers: [52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb]
	I0501 02:19:33.876187   27302 ssh_runner.go:195] Run: which crictl
	I0501 02:19:33.880970   27302 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb
	I0501 02:19:49.400461   27302 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb: (15.5
19438939s)
	W0501 02:19:49.400521   27302 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb: Process exited with status 1
	stdout:
	52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d
	ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54
	939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3
	a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339
	f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f
	5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75
	
	stderr:
	E0501 02:19:49.373801    3825 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2\": not found" containerID="c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2"
	time="2024-05-01T02:19:49Z" level=fatal msg="stopping the container \"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2\": not found"
	I0501 02:19:49.400576   27302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 02:19:49.442489   27302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:19:49.453682   27302 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May  1 02:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May  1 02:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May  1 02:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 May  1 02:18 /etc/kubernetes/scheduler.conf
	
	I0501 02:19:49.453722   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0501 02:19:49.463450   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0501 02:19:49.473268   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0501 02:19:49.482593   27302 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:19:49.482620   27302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:19:49.492406   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0501 02:19:49.501589   27302 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:19:49.501621   27302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:19:49.511385   27302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:19:49.521299   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:49.576852   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.275401   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.501617   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.586395   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.669276   27302 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:19:50.669347   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:19:51.169802   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:19:51.670333   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:19:51.687969   27302 api_server.go:72] duration metric: took 1.018693775s to wait for apiserver process to appear ...
	I0501 02:19:51.687984   27302 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:19:51.688003   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:52.986291   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 02:19:52.986313   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 02:19:52.986323   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:53.043640   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:53.043668   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:53.188909   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:53.193215   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:53.193230   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:53.688857   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:53.693916   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:53.693934   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:54.188678   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:54.205628   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:54.205654   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:54.688294   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:54.692103   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 200:
	ok
	I0501 02:19:54.698200   27302 api_server.go:141] control plane version: v1.30.0
	I0501 02:19:54.698212   27302 api_server.go:131] duration metric: took 3.010224858s to wait for apiserver health ...
	I0501 02:19:54.698218   27302 cni.go:84] Creating CNI manager for ""
	I0501 02:19:54.698223   27302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:19:54.699989   27302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 02:19:54.701380   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 02:19:54.716172   27302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 02:19:54.741211   27302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:19:54.757093   27302 system_pods.go:59] 7 kube-system pods found
	I0501 02:19:54.757117   27302 system_pods.go:61] "coredns-7db6d8ff4d-xv8bs" [ecdc231e-5cfc-4826-9956-e1270e6e9390] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 02:19:54.757122   27302 system_pods.go:61] "etcd-functional-167406" [c756611c-5955-4eb6-9e66-555a18726767] Running
	I0501 02:19:54.757130   27302 system_pods.go:61] "kube-apiserver-functional-167406" [4cd1e668-c6c5-42d0-8eff-11d1e7a37cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 02:19:54.757141   27302 system_pods.go:61] "kube-controller-manager-functional-167406" [753f721a-d8f9-4aae-a8e5-42e47750f595] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 02:19:54.757148   27302 system_pods.go:61] "kube-proxy-xbtf9" [049ec84e-c877-484d-b1b1-328156fb477d] Running
	I0501 02:19:54.757156   27302 system_pods.go:61] "kube-scheduler-functional-167406" [d249cb29-5a87-45f6-90fa-4b962d7394b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 02:19:54.757162   27302 system_pods.go:61] "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:19:54.757168   27302 system_pods.go:74] duration metric: took 15.946257ms to wait for pod list to return data ...
	I0501 02:19:54.757176   27302 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:19:54.760302   27302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:19:54.760318   27302 node_conditions.go:123] node cpu capacity is 2
	I0501 02:19:54.760328   27302 node_conditions.go:105] duration metric: took 3.147862ms to run NodePressure ...
	I0501 02:19:54.760346   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:55.029033   27302 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 02:19:55.034633   27302 kubeadm.go:733] kubelet initialised
	I0501 02:19:55.034651   27302 kubeadm.go:734] duration metric: took 5.595558ms waiting for restarted kubelet to initialise ...
	I0501 02:19:55.034659   27302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:19:55.045035   27302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:19:57.051415   27302 pod_ready.go:102] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"False"
	I0501 02:19:59.054146   27302 pod_ready.go:102] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"False"
	I0501 02:20:01.552035   27302 pod_ready.go:102] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"False"
	I0501 02:20:03.052650   27302 pod_ready.go:92] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:03.052662   27302 pod_ready.go:81] duration metric: took 8.007609985s for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:03.052668   27302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:03.058012   27302 pod_ready.go:92] pod "etcd-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:03.058023   27302 pod_ready.go:81] duration metric: took 5.349333ms for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:03.058033   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:05.064872   27302 pod_ready.go:102] pod "kube-apiserver-functional-167406" in "kube-system" namespace has status "Ready":"False"
	I0501 02:20:05.565939   27302 pod_ready.go:92] pod "kube-apiserver-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:05.565953   27302 pod_ready.go:81] duration metric: took 2.507911806s for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:05.565964   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.072548   27302 pod_ready.go:92] pod "kube-controller-manager-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.072562   27302 pod_ready.go:81] duration metric: took 506.587642ms for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.072570   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.077468   27302 pod_ready.go:92] pod "kube-proxy-xbtf9" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.077475   27302 pod_ready.go:81] duration metric: took 4.901001ms for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.077482   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.082661   27302 pod_ready.go:92] pod "kube-scheduler-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.082667   27302 pod_ready.go:81] duration metric: took 5.180679ms for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.082673   27302 pod_ready.go:38] duration metric: took 11.048005881s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:20:06.082686   27302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:20:06.096020   27302 ops.go:34] apiserver oom_adj: -16
	I0501 02:20:06.096030   27302 kubeadm.go:591] duration metric: took 32.286806378s to restartPrimaryControlPlane
	I0501 02:20:06.096037   27302 kubeadm.go:393] duration metric: took 32.367551096s to StartCluster
	I0501 02:20:06.096053   27302 settings.go:142] acquiring lock: {Name:mk5412669f58875b6a0bd1d6a1dcb2e935592f4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:20:06.096132   27302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:20:06.096736   27302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13407/kubeconfig: {Name:mk4670d16c1b854bc97e144ac00ddd58ecc61c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:20:06.096929   27302 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0501 02:20:06.098607   27302 out.go:177] * Verifying Kubernetes components...
	I0501 02:20:06.097009   27302 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:20:06.098632   27302 addons.go:69] Setting storage-provisioner=true in profile "functional-167406"
	I0501 02:20:06.099827   27302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:20:06.099852   27302 addons.go:234] Setting addon storage-provisioner=true in "functional-167406"
	W0501 02:20:06.099860   27302 addons.go:243] addon storage-provisioner should already be in state true
	I0501 02:20:06.099881   27302 host.go:66] Checking if "functional-167406" exists ...
	I0501 02:20:06.097108   27302 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:20:06.098644   27302 addons.go:69] Setting default-storageclass=true in profile "functional-167406"
	I0501 02:20:06.099986   27302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-167406"
	I0501 02:20:06.100179   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.100220   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.100306   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.100341   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.114376   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0501 02:20:06.114748   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.115211   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.115227   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.115351   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0501 02:20:06.115569   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.115713   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.115765   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:20:06.116239   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.116255   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.116544   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.117096   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.117132   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.118355   27302 addons.go:234] Setting addon default-storageclass=true in "functional-167406"
	W0501 02:20:06.118363   27302 addons.go:243] addon default-storageclass should already be in state true
	I0501 02:20:06.118386   27302 host.go:66] Checking if "functional-167406" exists ...
	I0501 02:20:06.118724   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.118757   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.132056   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0501 02:20:06.132367   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.132796   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.132824   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.133092   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.133652   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.133687   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.135199   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0501 02:20:06.135589   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.136121   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.136138   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.136403   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.136599   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:20:06.138120   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:06.140321   27302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:20:06.141799   27302 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:20:06.141809   27302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:20:06.141830   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:20:06.144487   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.144874   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:20:06.144901   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.145049   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:20:06.145233   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:20:06.145425   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:20:06.145550   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:20:06.148575   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0501 02:20:06.148910   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.149344   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.149353   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.149639   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.149825   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:20:06.151057   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:06.151309   27302 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:20:06.151318   27302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:20:06.151332   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:20:06.153814   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.154212   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:20:06.154230   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.154354   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:20:06.154522   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:20:06.154665   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:20:06.154784   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:20:06.291969   27302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:20:06.310462   27302 node_ready.go:35] waiting up to 6m0s for node "functional-167406" to be "Ready" ...
	I0501 02:20:06.314577   27302 node_ready.go:49] node "functional-167406" has status "Ready":"True"
	I0501 02:20:06.314587   27302 node_ready.go:38] duration metric: took 4.105122ms for node "functional-167406" to be "Ready" ...
	I0501 02:20:06.314595   27302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:20:06.320143   27302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.392851   27302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:20:06.403455   27302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:20:06.650181   27302 pod_ready.go:92] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.650195   27302 pod_ready.go:81] duration metric: took 330.040348ms for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.650206   27302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.049853   27302 pod_ready.go:92] pod "etcd-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:07.049864   27302 pod_ready.go:81] duration metric: took 399.652977ms for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.049873   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.068039   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068053   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.068102   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068112   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.068321   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.068325   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.068330   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.068335   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.068343   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068345   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068350   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.068352   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.069878   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.069888   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.069896   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.069905   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.070002   27302 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
	I0501 02:20:07.070009   27302 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
	I0501 02:20:07.079813   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.079823   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.080103   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.080112   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.082343   27302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:20:07.083683   27302 addons.go:505] duration metric: took 986.687248ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 02:20:07.449877   27302 pod_ready.go:92] pod "kube-apiserver-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:07.449897   27302 pod_ready.go:81] duration metric: took 400.018258ms for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.449908   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.849418   27302 pod_ready.go:92] pod "kube-controller-manager-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:07.849429   27302 pod_ready.go:81] duration metric: took 399.514247ms for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.849437   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.249116   27302 pod_ready.go:92] pod "kube-proxy-xbtf9" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:08.249126   27302 pod_ready.go:81] duration metric: took 399.68419ms for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.249134   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.662879   27302 pod_ready.go:92] pod "kube-scheduler-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:08.662889   27302 pod_ready.go:81] duration metric: took 413.749499ms for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.662897   27302 pod_ready.go:38] duration metric: took 2.348293104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:20:08.662908   27302 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:20:08.662954   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:20:08.693543   27302 api_server.go:72] duration metric: took 2.596595813s to wait for apiserver process to appear ...
	I0501 02:20:08.693556   27302 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:20:08.693579   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:20:08.712207   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 200:
	ok
	I0501 02:20:08.713171   27302 api_server.go:141] control plane version: v1.30.0
	I0501 02:20:08.713188   27302 api_server.go:131] duration metric: took 19.62622ms to wait for apiserver health ...
	I0501 02:20:08.713196   27302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:20:08.853696   27302 system_pods.go:59] 7 kube-system pods found
	I0501 02:20:08.853712   27302 system_pods.go:61] "coredns-7db6d8ff4d-xv8bs" [ecdc231e-5cfc-4826-9956-e1270e6e9390] Running
	I0501 02:20:08.853718   27302 system_pods.go:61] "etcd-functional-167406" [c756611c-5955-4eb6-9e66-555a18726767] Running
	I0501 02:20:08.853722   27302 system_pods.go:61] "kube-apiserver-functional-167406" [4cd1e668-c6c5-42d0-8eff-11d1e7a37cb5] Running
	I0501 02:20:08.853726   27302 system_pods.go:61] "kube-controller-manager-functional-167406" [753f721a-d8f9-4aae-a8e5-42e47750f595] Running
	I0501 02:20:08.853730   27302 system_pods.go:61] "kube-proxy-xbtf9" [049ec84e-c877-484d-b1b1-328156fb477d] Running
	I0501 02:20:08.853732   27302 system_pods.go:61] "kube-scheduler-functional-167406" [d249cb29-5a87-45f6-90fa-4b962d7394b6] Running
	I0501 02:20:08.853736   27302 system_pods.go:61] "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:20:08.853743   27302 system_pods.go:74] duration metric: took 140.541233ms to wait for pod list to return data ...
	I0501 02:20:08.853752   27302 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:20:09.049668   27302 default_sa.go:45] found service account: "default"
	I0501 02:20:09.049681   27302 default_sa.go:55] duration metric: took 195.92317ms for default service account to be created ...
	I0501 02:20:09.049690   27302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:20:09.255439   27302 system_pods.go:86] 7 kube-system pods found
	I0501 02:20:09.255454   27302 system_pods.go:89] "coredns-7db6d8ff4d-xv8bs" [ecdc231e-5cfc-4826-9956-e1270e6e9390] Running
	I0501 02:20:09.255460   27302 system_pods.go:89] "etcd-functional-167406" [c756611c-5955-4eb6-9e66-555a18726767] Running
	I0501 02:20:09.255466   27302 system_pods.go:89] "kube-apiserver-functional-167406" [4cd1e668-c6c5-42d0-8eff-11d1e7a37cb5] Running
	I0501 02:20:09.255471   27302 system_pods.go:89] "kube-controller-manager-functional-167406" [753f721a-d8f9-4aae-a8e5-42e47750f595] Running
	I0501 02:20:09.255475   27302 system_pods.go:89] "kube-proxy-xbtf9" [049ec84e-c877-484d-b1b1-328156fb477d] Running
	I0501 02:20:09.255478   27302 system_pods.go:89] "kube-scheduler-functional-167406" [d249cb29-5a87-45f6-90fa-4b962d7394b6] Running
	I0501 02:20:09.255485   27302 system_pods.go:89] "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:20:09.255492   27302 system_pods.go:126] duration metric: took 205.797561ms to wait for k8s-apps to be running ...
	I0501 02:20:09.255501   27302 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:20:09.255557   27302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:20:09.275685   27302 system_svc.go:56] duration metric: took 20.175711ms WaitForService to wait for kubelet
	I0501 02:20:09.275704   27302 kubeadm.go:576] duration metric: took 3.178756744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:20:09.275720   27302 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:20:09.449853   27302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:20:09.449866   27302 node_conditions.go:123] node cpu capacity is 2
	I0501 02:20:09.449874   27302 node_conditions.go:105] duration metric: took 174.150822ms to run NodePressure ...
	I0501 02:20:09.449883   27302 start.go:240] waiting for startup goroutines ...
	I0501 02:20:09.449889   27302 start.go:245] waiting for cluster config update ...
	I0501 02:20:09.449897   27302 start.go:254] writing updated cluster config ...
	I0501 02:20:09.450124   27302 ssh_runner.go:195] Run: rm -f paused
	I0501 02:20:09.497259   27302 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:20:09.499251   27302 out.go:177] * Done! kubectl is now configured to use "functional-167406" cluster and "default" namespace by default
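The restart above hinges on the api_server.go wait loop: it keeps probing https://192.168.39.209:8441/healthz, treating the interim 403 ("system:anonymous") and poststarthook 500 responses as retryable until the 200 arrives at 02:19:54. Below is a minimal Go sketch of that polling pattern; it is not minikube's code, and the 500 ms retry interval, 5 s per-request timeout, and skipped TLS verification are illustrative assumptions.

// healthzpoll is a minimal sketch of polling an apiserver /healthz endpoint
// until it returns 200 or a deadline passes (assumptions noted above).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves its own cert during bring-up, so this
		// throwaway probe skips verification (assumption, not minikube code).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200, as at 02:19:54 above
			}
			// 403/500 mirror the retryable responses seen in the log.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.209:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}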
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ae6f4e38ab4f3       6e38f40d628db       19 seconds ago       Running             storage-provisioner       4                   d3f41e0f975da       storage-provisioner
	ef9868f7ee3c3       cbb01a7bd410d       34 seconds ago       Running             coredns                   2                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
	b8e78e9b1aa3a       6e38f40d628db       34 seconds ago       Exited              storage-provisioner       3                   d3f41e0f975da       storage-provisioner
	429a24a39fec5       c42f13656d0b2       37 seconds ago       Running             kube-apiserver            2                   88ce05d0d4379       kube-apiserver-functional-167406
	350765a60a825       c7aad43836fa5       37 seconds ago       Running             kube-controller-manager   2                   fec06a36743b8       kube-controller-manager-functional-167406
	a513f3286b775       259c8277fcbbc       44 seconds ago       Running             kube-scheduler            2                   a3c933aaaf5a9       kube-scheduler-functional-167406
	3b377dde86d26       3861cfcd7c04c       44 seconds ago       Running             etcd                      2                   bdca39c10acda       etcd-functional-167406
	6df6abb34b88d       a0bf559e280cf       44 seconds ago       Running             kube-proxy                2                   13168bbfbe961       kube-proxy-xbtf9
	52ce55f010233       c42f13656d0b2       About a minute ago   Exited              kube-apiserver            1                   88ce05d0d4379       kube-apiserver-functional-167406
	ebe11aa9f8804       c7aad43836fa5       About a minute ago   Exited              kube-controller-manager   1                   fec06a36743b8       kube-controller-manager-functional-167406
	939e53f1e1db0       259c8277fcbbc       About a minute ago   Exited              kube-scheduler            1                   a3c933aaaf5a9       kube-scheduler-functional-167406
	a1f43ae8da4b3       3861cfcd7c04c       About a minute ago   Exited              etcd                      1                   bdca39c10acda       etcd-functional-167406
	f0dc76865d087       a0bf559e280cf       About a minute ago   Exited              kube-proxy                1                   13168bbfbe961       kube-proxy-xbtf9
	5652211ff7b29       cbb01a7bd410d       About a minute ago   Exited              coredns                   1                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
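The Exited rows above are the attempt-1 containers that the earlier batch "sudo /usr/bin/crictl stop --timeout=10 ..." targeted; the 02:19:49 warning shows that batch aborting with exit status 1 once one ID had already been removed, leaving only six IDs in its stdout. Below is a minimal sketch, not minikube's implementation, of stopping such a list one ID at a time while tolerating that NotFound race; it assumes crictl is on PATH and reachable via sudo, as in the log, and the IDs in main are placeholders.

// crictlstop is a minimal sketch of a per-ID stop loop that treats
// "not found" as already-stopped (assumptions noted above).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func stopContainers(ids []string, timeoutSec int) error {
	for _, id := range ids {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "stop",
			fmt.Sprintf("--timeout=%d", timeoutSec), id).CombinedOutput()
		if err != nil {
			// Tolerate the race seen in the log: the container is already gone.
			if strings.Contains(string(out), "not found") {
				continue
			}
			return fmt.Errorf("crictl stop %s: %v\n%s", id, err, out)
		}
	}
	return nil
}

func main() {
	// Illustrative IDs only; real IDs would come from `sudo crictl ps -aq`.
	if err := stopContainers([]string{"52ce55f010233", "ebe11aa9f8804"}, 10); err != nil {
		fmt.Println(err)
	}
}

Stopping per ID keeps one already-deleted container from failing the rest of the batch, which is what cut the single batched invocation short in the log above.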
	
	
	==> containerd <==
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.874354592Z" level=info msg="CreateContainer within sandbox \"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:3,}"
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.914443000Z" level=info msg="CreateContainer within sandbox \"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed\" for &ContainerMetadata{Name:storage-provisioner,Attempt:3,} returns container id \"b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965\""
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.914946703Z" level=info msg="StartContainer for \"b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965\""
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.926616911Z" level=info msg="CreateContainer within sandbox \"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261\" for &ContainerMetadata{Name:coredns,Attempt:2,} returns container id \"ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485\""
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.927031332Z" level=info msg="StartContainer for \"ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485\""
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.048753981Z" level=info msg="StartContainer for \"b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965\" returns successfully"
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.119825800Z" level=info msg="StartContainer for \"ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485\" returns successfully"
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.142481068Z" level=info msg="shim disconnected" id=b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965 namespace=k8s.io
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.142643013Z" level=warning msg="cleaning up after shim disconnected" id=b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965 namespace=k8s.io
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.142773469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.696328638Z" level=info msg="RemoveContainer for \"7aaa1a01414d1ba659b5c8289583d21c96e0824437226a7421dfa4ff22fa0fa5\""
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.708310102Z" level=info msg="RemoveContainer for \"7aaa1a01414d1ba659b5c8289583d21c96e0824437226a7421dfa4ff22fa0fa5\" returns successfully"
	May 01 02:20:08 functional-167406 containerd[3593]: time="2024-05-01T02:20:08.604591293Z" level=info msg="CreateContainer within sandbox \"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:4,}"
	May 01 02:20:08 functional-167406 containerd[3593]: time="2024-05-01T02:20:08.626050966Z" level=info msg="CreateContainer within sandbox \"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed\" for &ContainerMetadata{Name:storage-provisioner,Attempt:4,} returns container id \"ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b\""
	May 01 02:20:08 functional-167406 containerd[3593]: time="2024-05-01T02:20:08.626735303Z" level=info msg="StartContainer for \"ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b\""
	May 01 02:20:08 functional-167406 containerd[3593]: time="2024-05-01T02:20:08.749783208Z" level=info msg="StartContainer for \"ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b\" returns successfully"
	May 01 02:20:10 functional-167406 containerd[3593]: time="2024-05-01T02:20:10.555611067Z" level=info msg="StopContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" with timeout 30 (s)"
	May 01 02:20:10 functional-167406 containerd[3593]: time="2024-05-01T02:20:10.556163529Z" level=info msg="Stop container \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" with signal terminated"
	May 01 02:20:19 functional-167406 containerd[3593]: time="2024-05-01T02:20:19.351055932Z" level=info msg="ImageCreate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:19 functional-167406 containerd[3593]: time="2024-05-01T02:20:19.359428272Z" level=info msg="ImageCreate event name:\"sha256:b08046378d77c9dfdab5fbe738244949bc9d487d7b394813b7209ff1f43b82cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:19 functional-167406 containerd[3593]: time="2024-05-01T02:20:19.359835265Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:28 functional-167406 containerd[3593]: time="2024-05-01T02:20:28.502213900Z" level=info msg="RemoveImage \"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:28 functional-167406 containerd[3593]: time="2024-05-01T02:20:28.505480567Z" level=info msg="ImageDelete event name:\"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:28 functional-167406 containerd[3593]: time="2024-05-01T02:20:28.507593923Z" level=info msg="ImageDelete event name:\"sha256:b08046378d77c9dfdab5fbe738244949bc9d487d7b394813b7209ff1f43b82cd\""
	May 01 02:20:28 functional-167406 containerd[3593]: time="2024-05-01T02:20:28.558902132Z" level=info msg="RemoveImage \"gcr.io/google-containers/addon-resizer:functional-167406\" returns successfully"
	
	
	==> coredns [5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43474 - 46251 "HINFO IN 6093638740258044659.1554125567718258750. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008772047s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: unknown (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: unknown (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51551 - 12396 "HINFO IN 7161565364375486857.4859467522399385342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006762819s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May 1 02:18] kauditd_printk_skb: 94 callbacks suppressed
	[ +32.076735] systemd-fstab-generator[2180]: Ignoring "noauto" option for root device
	[  +0.169403] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.211042] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.165983] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.323845] systemd-fstab-generator[2247]: Ignoring "noauto" option for root device
	[  +2.137091] systemd-fstab-generator[2452]: Ignoring "noauto" option for root device
	[  +0.094208] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.831325] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.516674] kauditd_printk_skb: 14 callbacks suppressed
	[  +1.457832] systemd-fstab-generator[3047]: Ignoring "noauto" option for root device
	[May 1 02:19] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.754628] systemd-fstab-generator[3215]: Ignoring "noauto" option for root device
	[ +14.125843] systemd-fstab-generator[3518]: Ignoring "noauto" option for root device
	[  +0.076849] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.077827] systemd-fstab-generator[3530]: Ignoring "noauto" option for root device
	[  +0.188600] systemd-fstab-generator[3544]: Ignoring "noauto" option for root device
	[  +0.171319] systemd-fstab-generator[3556]: Ignoring "noauto" option for root device
	[  +0.356766] systemd-fstab-generator[3585]: Ignoring "noauto" option for root device
	[  +1.365998] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
	[ +10.881538] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.346698] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.027943] systemd-fstab-generator[4273]: Ignoring "noauto" option for root device
	[  +4.180252] kauditd_printk_skb: 36 callbacks suppressed
	[May 1 02:20] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	
	
	==> etcd [3b377dde86d267c8742b885c6b59382115c63d70d37c1823e0e1d10f97eff8b3] <==
	{"level":"info","ts":"2024-05-01T02:19:44.776714Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.77674Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.777129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b switched to configuration voters=(8441320971333687067)"}
	{"level":"info","ts":"2024-05-01T02:19:44.777351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","added-peer-id":"752598b30b66571b","added-peer-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2024-05-01T02:19:44.777547Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.777589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.781098Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T02:19:44.781692Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"752598b30b66571b","initial-advertise-peer-urls":["https://192.168.39.209:2380"],"listen-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.209:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T02:19:44.781836Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T02:19:44.782391Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.782447Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:46.149524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.152677Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:19:46.152701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.152914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.153408Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.153471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.155829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:19:46.156978Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339] <==
	{"level":"info","ts":"2024-05-01T02:18:47.383086Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:18:48.759417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.767118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.767067Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:18:48.768075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.768693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.768883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.769381Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:18:48.770832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T02:19:44.172843Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-01T02:19:44.172953Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	{"level":"warn","ts":"2024-05-01T02:19:44.173117Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.17315Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175169Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175192Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T02:19:44.175362Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"752598b30b66571b","current-leader-member-id":"752598b30b66571b"}
	{"level":"info","ts":"2024-05-01T02:19:44.178843Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179043Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179065Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	
	
	==> kernel <==
	 02:20:29 up 3 min,  0 users,  load average: 1.01, 0.52, 0.20
	Linux functional-167406 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5] <==
	I0501 02:20:10.588037       1 controller.go:167] Shutting down OpenAPI controller
	I0501 02:20:10.588047       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0501 02:20:10.588059       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0501 02:20:10.588069       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0501 02:20:10.588079       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0501 02:20:10.588085       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0501 02:20:10.588150       1 controller.go:129] Ending legacy_token_tracking_controller
	I0501 02:20:10.588186       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0501 02:20:10.592418       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0501 02:20:10.594973       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0501 02:20:10.595024       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0501 02:20:10.595151       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 02:20:10.595337       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:20:10.595353       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 02:20:10.595372       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0501 02:20:10.595406       1 secure_serving.go:258] Stopped listening on [::]:8441
	I0501 02:20:10.595418       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0501 02:20:10.595835       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 02:20:10.598324       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:20:10.601393       1 controller.go:157] Shutting down quota evaluator
	I0501 02:20:10.601407       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:20:10.601629       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:20:10.601638       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:20:10.601643       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:20:10.601647       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d] <==
	I0501 02:19:33.934728       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0501 02:19:33.935337       1 naming_controller.go:302] Shutting down NamingConditionController
	I0501 02:19:33.937005       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0501 02:19:33.937289       1 controller.go:167] Shutting down OpenAPI controller
	I0501 02:19:33.937419       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:19:33.937516       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 02:19:33.937619       1 controller.go:157] Shutting down quota evaluator
	I0501 02:19:33.937636       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:19:33.932967       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 02:19:33.932633       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0501 02:19:33.932975       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:19:33.933014       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 02:19:33.933029       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0501 02:19:33.933092       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0501 02:19:33.932643       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0501 02:19:33.932656       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0501 02:19:33.932665       1 establishing_controller.go:87] Shutting down EstablishingController
	I0501 02:19:33.932675       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0501 02:19:33.933038       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0501 02:19:33.933043       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0501 02:19:33.933081       1 secure_serving.go:258] Stopped listening on [::]:8441
	I0501 02:19:33.940207       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:19:33.941508       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:19:33.941544       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:19:33.942342       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [350765a60a82586dd2a69686a601b5d16ad68d05a64cd6e4d3359df1866500b5] <==
	I0501 02:20:05.548426       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 02:20:05.552106       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 02:20:05.557461       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 02:20:05.559188       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 02:20:05.560921       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 02:20:05.561099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.912µs"
	I0501 02:20:05.565885       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 02:20:05.569741       1 shared_informer.go:320] Caches are synced for service account
	I0501 02:20:05.578368       1 shared_informer.go:320] Caches are synced for HPA
	I0501 02:20:05.580839       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:20:05.583366       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:20:05.584712       1 shared_informer.go:320] Caches are synced for GC
	I0501 02:20:05.590141       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 02:20:05.596584       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:20:05.600223       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 02:20:05.602715       1 shared_informer.go:320] Caches are synced for job
	I0501 02:20:05.605865       1 shared_informer.go:320] Caches are synced for deployment
	I0501 02:20:05.608288       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:20:05.634366       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:20:05.663770       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:20:05.752163       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:05.763685       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:06.213812       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228527       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54] <==
	I0501 02:19:13.936373       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 02:19:13.936390       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:19:13.940386       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 02:19:13.942716       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:19:13.946741       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:19:13.949349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.775495ms"
	I0501 02:19:13.950927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.553µs"
	I0501 02:19:13.969177       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 02:19:13.975817       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 02:19:13.985573       1 shared_informer.go:320] Caches are synced for TTL
	I0501 02:19:13.986878       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:19:13.991538       1 shared_informer.go:320] Caches are synced for node
	I0501 02:19:13.991869       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 02:19:13.992064       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 02:19:13.992201       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 02:19:13.992333       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 02:19:14.022008       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:19:14.035151       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 02:19:14.043403       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.068572       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.086442       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:19:14.135817       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 02:19:14.567440       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602838       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602885       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6df6abb34b88dfeaae1f93d6a23cfc1748633884bc829df09c3047477d7f424c] <==
	I0501 02:19:44.730099       1 server_linux.go:69] "Using iptables proxy"
	E0501 02:19:44.732063       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:45.813700       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:47.982154       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	I0501 02:19:53.031359       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0501 02:19:53.089991       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:19:53.090036       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:19:53.090052       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:19:53.094508       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:19:53.095319       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:19:53.095716       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:19:53.097123       1 config.go:192] "Starting service config controller"
	I0501 02:19:53.097468       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:19:53.097670       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:19:53.097907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:19:53.098658       1 config.go:319] "Starting node config controller"
	I0501 02:19:53.101299       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:19:53.198633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:53.198675       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:19:53.201407       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f] <==
	I0501 02:18:49.135475       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0501 02:18:49.135542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135935       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.209:8441: connect: connection refused"
	W0501 02:18:49.960987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.961201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.247414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.247829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.353906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.354334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.351893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.352039       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.513544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.513603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.774168       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.774360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:55.789131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:55.789541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.962943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.962985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.352087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.352161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	I0501 02:19:06.033778       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:07.236470       1 shared_informer.go:320] Caches are synced for node config
	I0501 02:19:08.934441       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3] <==
	E0501 02:18:57.123850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.195323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.195395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.309765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.309834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.470763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.470798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.772512       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.772548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.804749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.804779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.886920       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.886982       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.929219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.929386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.978490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.978527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.311728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.311770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:00.939844       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:19:00.939973       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:19:01.688744       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0501 02:19:09.088531       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 02:19:12.088779       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0501 02:19:44.107636       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a513f3286b775a1c5c742fd0ac19b8fa8a6ee5129122ad75de1496bed6278d1f] <==
	W0501 02:19:49.143896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.143978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.351289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.351443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.596848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.596882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.654875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.654916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.674532       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.674621       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.791451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.791485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.859678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.859751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.074783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.074851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.174913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.174963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.183651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.183678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.386329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.386369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:52.969018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 02:19:52.970815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 02:19:54.216441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.227612    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.228840    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.229739    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.230538    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.231593    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: I0501 02:20:13.231641    4280 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.232115    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="200ms"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.433365    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="400ms"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.498599    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.499212    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.499916    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.500701    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.501362    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.501378    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.834926    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="800ms"
	May 01 02:20:14 functional-167406 kubelet[4280]: E0501 02:20:14.637220    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="1.6s"
	May 01 02:20:16 functional-167406 kubelet[4280]: E0501 02:20:16.240104    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="3.2s"
	May 01 02:20:19 functional-167406 kubelet[4280]: E0501 02:20:19.442021    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="6.4s"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.757863    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.758851    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.759904    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.760825    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.761804    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.761910    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 02:20:25 functional-167406 kubelet[4280]: E0501 02:20:25.843312    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="7s"
	
	
	==> storage-provisioner [ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b] <==
	I0501 02:20:08.757073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 02:20:08.772588       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 02:20:08.772654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0501 02:20:12.228155       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:16.487066       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:20.083198       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:23.134350       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:26.154932       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965] <==
	I0501 02:19:54.061102       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0501 02:19:54.064135       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
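The kubelet log above shows the lease controller backing off while the apiserver refuses connections: the retry interval doubles from 200ms up to a cap (200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s, then 7s). Below is a minimal Go sketch of that doubling-with-cap pattern; the names and the 7s cap are illustrative, taken from the intervals logged above rather than from kubelet's actual implementation.

package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the current retry interval, clamped to maxBackoff.
// Illustrative only; not kubelet's code.
func nextInterval(cur, maxBackoff time.Duration) time.Duration {
	if next := cur * 2; next < maxBackoff {
		return next
	}
	return maxBackoff
}

func main() {
	interval := 200 * time.Millisecond
	maxBackoff := 7 * time.Second
	for i := 0; i < 7; i++ {
		fmt.Println(interval) // prints 200ms, 400ms, 800ms, 1.6s, 3.2s, 6.4s, 7s
		interval = nextInterval(interval, maxBackoff)
	}
}

Running the sketch prints the same progression as the "will retry ... interval=" values in the kubelet entries above.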
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406: exit status 2 (13.416900805s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-167406" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (29.31s)

                                                
                                    
TestFunctional/parallel/NodeLabels (27.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-167406 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-167406 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (49.913827ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.209:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-167406 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.209:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.209:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.209:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.209:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.209:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
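Every label assertion above fails for the same reason: with the apiserver refusing connections, kubectl returned an empty List ({"items":[]}), and `index .items 0` on an empty slice makes template execution fail with "reflect: slice index out of range". The following self-contained Go sketch reproduces that error with text/template and shows a guarded variant that skips indexing when the list is empty; the data and the guard are illustrative, not part of the test itself.

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// Same shape as the template the test passes to kubectl.
	const tmpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`

	// Empty "items", matching the raw data shown in the test output above.
	data := map[string]interface{}{
		"items": []interface{}{},
	}

	t := template.Must(template.New("output").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		// Execution fails with an "error calling index: reflect: slice index
		// out of range" error rather than a panic.
		fmt.Fprintln(os.Stderr, "template error:", err)
	}

	// A guarded variant only indexes when the list is non-empty, so it
	// prints nothing instead of erroring on an empty List.
	const guarded = `{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}`
	if err := template.Must(template.New("guarded").Parse(guarded)).Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, "template error:", err)
	}
}

With the empty list, the first Execute reports the same "slice index out of range" message seen in the test output; the guarded template simply produces no output.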
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-167406 -n functional-167406: exit status 2 (12.955891791s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 logs -n 25: (1.616997169s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| config  | functional-167406 config set                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | cpus 2                                                     |                   |         |         |                     |                     |
	| config  | functional-167406 config get                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | cpus                                                       |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo                                 | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | systemctl is-active crio                                   |                   |         |         |                     |                     |
	| config  | functional-167406 config unset                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | cpus                                                       |                   |         |         |                     |                     |
	| config  | functional-167406 config get                               | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | cpus                                                       |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/ssl/certs/20785.pem                                   |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /usr/share/ca-certificates/20785.pem                       |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/test/nested/copy/20785/hosts                          |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/ssl/certs/51391683.0                                  |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/ssl/certs/207852.pem                                  |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /usr/share/ca-certificates/207852.pem                      |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh sudo cat                             | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/ssl/certs/3ec20f2e.0                                  |                   |         |         |                     |                     |
	| cp      | functional-167406 cp                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | testdata/cp-test.txt                                       |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                   |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh -n                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | functional-167406 sudo cat                                 |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                   |                   |         |         |                     |                     |
	| cp      | functional-167406 cp                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | functional-167406:/home/docker/cp-test.txt                 |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd2548486059/001/cp-test.txt |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh -n                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | functional-167406 sudo cat                                 |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                   |                   |         |         |                     |                     |
	| cp      | functional-167406 cp                                       | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | testdata/cp-test.txt                                       |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                            |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh -n                                   | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | functional-167406 sudo cat                                 |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                            |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh echo                                 | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | hello                                                      |                   |         |         |                     |                     |
	| ssh     | functional-167406 ssh cat                                  | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | /etc/hostname                                              |                   |         |         |                     |                     |
	| image   | functional-167406 image load --daemon                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                          |                   |         |         |                     |                     |
	| image   | functional-167406 image ls                                 | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| image   | functional-167406 image load --daemon                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-167406   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                          |                   |         |         |                     |                     |
	| image   | functional-167406 image ls                                 | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC | 01 May 24 02:20 UTC |
	| image   | functional-167406 image load --daemon                      | functional-167406 | jenkins | v1.33.0 | 01 May 24 02:20 UTC |                     |
	|         | gcr.io/google-containers/addon-resizer:functional-167406   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                          |                   |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:19:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:19:29.437826   27302 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:19:29.438165   27302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:19:29.438221   27302 out.go:304] Setting ErrFile to fd 2...
	I0501 02:19:29.438230   27302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:19:29.438701   27302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:19:29.439585   27302 out.go:298] Setting JSON to false
	I0501 02:19:29.440532   27302 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3711,"bootTime":1714526258,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:19:29.440583   27302 start.go:139] virtualization: kvm guest
	I0501 02:19:29.442564   27302 out.go:177] * [functional-167406] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:19:29.444360   27302 notify.go:220] Checking for updates...
	I0501 02:19:29.444368   27302 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:19:29.445648   27302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:19:29.447273   27302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:19:29.448681   27302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 02:19:29.449982   27302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:19:29.451239   27302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:19:29.452846   27302 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:19:29.452913   27302 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:19:29.453282   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:19:29.453328   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:19:29.467860   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0501 02:19:29.468232   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:19:29.468835   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:19:29.468843   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:19:29.469189   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:19:29.469423   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:29.500693   27302 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 02:19:29.502118   27302 start.go:297] selected driver: kvm2
	I0501 02:19:29.502122   27302 start.go:901] validating driver "kvm2" against &{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:19:29.502238   27302 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:19:29.502533   27302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:19:29.502594   27302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13407/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:19:29.516334   27302 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:19:29.516947   27302 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:19:29.516997   27302 cni.go:84] Creating CNI manager for ""
	I0501 02:19:29.517005   27302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:19:29.517051   27302 start.go:340] cluster config:
	{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:19:29.517150   27302 iso.go:125] acquiring lock: {Name:mk2f0fca3713b9e2ec58748a6d2af30df1faa5ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:19:29.518781   27302 out.go:177] * Starting "functional-167406" primary control-plane node in "functional-167406" cluster
	I0501 02:19:29.519852   27302 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0501 02:19:29.519871   27302 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0501 02:19:29.519876   27302 cache.go:56] Caching tarball of preloaded images
	I0501 02:19:29.519929   27302 preload.go:173] Found /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0501 02:19:29.519935   27302 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on containerd
	I0501 02:19:29.520013   27302 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/config.json ...
	I0501 02:19:29.520168   27302 start.go:360] acquireMachinesLock for functional-167406: {Name:mkdc802449570b9ab245fcfdfa79580f6e5fb7ac Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0501 02:19:29.520199   27302 start.go:364] duration metric: took 21.879µs to acquireMachinesLock for "functional-167406"
	I0501 02:19:29.520208   27302 start.go:96] Skipping create...Using existing machine configuration
	I0501 02:19:29.520211   27302 fix.go:54] fixHost starting: 
	I0501 02:19:29.520447   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:19:29.520486   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:19:29.533583   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0501 02:19:29.533931   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:19:29.534437   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:19:29.534450   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:19:29.534783   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:19:29.534968   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:29.535081   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:19:29.536552   27302 fix.go:112] recreateIfNeeded on functional-167406: state=Running err=<nil>
	W0501 02:19:29.536561   27302 fix.go:138] unexpected machine state, will restart: <nil>
	I0501 02:19:29.538271   27302 out.go:177] * Updating the running kvm2 "functional-167406" VM ...
	I0501 02:19:29.539520   27302 machine.go:94] provisionDockerMachine start ...
	I0501 02:19:29.539539   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:29.539733   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.541923   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.542256   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.542296   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.542428   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:29.542582   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.542731   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.542827   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:29.542960   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:29.543168   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:29.543175   27302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0501 02:19:29.655744   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-167406
	
	I0501 02:19:29.655773   27302 main.go:141] libmachine: (functional-167406) Calling .GetMachineName
	I0501 02:19:29.655991   27302 buildroot.go:166] provisioning hostname "functional-167406"
	I0501 02:19:29.656006   27302 main.go:141] libmachine: (functional-167406) Calling .GetMachineName
	I0501 02:19:29.656190   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.658663   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.659033   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.659051   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.659173   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:29.659306   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.659396   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.659522   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:29.659654   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:29.659806   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:29.659812   27302 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-167406 && echo "functional-167406" | sudo tee /etc/hostname
	I0501 02:19:29.787678   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-167406
	
	I0501 02:19:29.787698   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.790278   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.790574   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.790592   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.790738   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:29.790915   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.791052   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:29.791179   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:29.791296   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:29.791539   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:29.791556   27302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-167406' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-167406/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-167406' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0501 02:19:29.904529   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0501 02:19:29.904545   27302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18779-13407/.minikube CaCertPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18779-13407/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18779-13407/.minikube}
	I0501 02:19:29.904577   27302 buildroot.go:174] setting up certificates
	I0501 02:19:29.904585   27302 provision.go:84] configureAuth start
	I0501 02:19:29.904595   27302 main.go:141] libmachine: (functional-167406) Calling .GetMachineName
	I0501 02:19:29.904823   27302 main.go:141] libmachine: (functional-167406) Calling .GetIP
	I0501 02:19:29.907376   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.907737   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.907764   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.907905   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:29.910052   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.910361   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:29.910376   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:29.910493   27302 provision.go:143] copyHostCerts
	I0501 02:19:29.910529   27302 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13407/.minikube/ca.pem, removing ...
	I0501 02:19:29.910534   27302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13407/.minikube/ca.pem
	I0501 02:19:29.910593   27302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18779-13407/.minikube/ca.pem (1078 bytes)
	I0501 02:19:29.910685   27302 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13407/.minikube/cert.pem, removing ...
	I0501 02:19:29.910689   27302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13407/.minikube/cert.pem
	I0501 02:19:29.910711   27302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18779-13407/.minikube/cert.pem (1123 bytes)
	I0501 02:19:29.910767   27302 exec_runner.go:144] found /home/jenkins/minikube-integration/18779-13407/.minikube/key.pem, removing ...
	I0501 02:19:29.910770   27302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18779-13407/.minikube/key.pem
	I0501 02:19:29.910790   27302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18779-13407/.minikube/key.pem (1675 bytes)
	I0501 02:19:29.910856   27302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18779-13407/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca-key.pem org=jenkins.functional-167406 san=[127.0.0.1 192.168.39.209 functional-167406 localhost minikube]
	I0501 02:19:30.193847   27302 provision.go:177] copyRemoteCerts
	I0501 02:19:30.193886   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0501 02:19:30.193910   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.196409   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.196720   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.196739   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.196903   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.197084   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.197230   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.197366   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.287862   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0501 02:19:30.315195   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0501 02:19:30.343749   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0501 02:19:30.370719   27302 provision.go:87] duration metric: took 466.124066ms to configureAuth
	I0501 02:19:30.370742   27302 buildroot.go:189] setting minikube options for container-runtime
	I0501 02:19:30.370956   27302 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:19:30.370964   27302 machine.go:97] duration metric: took 831.438029ms to provisionDockerMachine
	I0501 02:19:30.370973   27302 start.go:293] postStartSetup for "functional-167406" (driver="kvm2")
	I0501 02:19:30.370984   27302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0501 02:19:30.371006   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.371291   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0501 02:19:30.371313   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.373948   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.374280   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.374299   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.374374   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.374561   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.374711   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.374838   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.466722   27302 ssh_runner.go:195] Run: cat /etc/os-release
	I0501 02:19:30.471556   27302 info.go:137] Remote host: Buildroot 2023.02.9
	I0501 02:19:30.471568   27302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13407/.minikube/addons for local assets ...
	I0501 02:19:30.471626   27302 filesync.go:126] Scanning /home/jenkins/minikube-integration/18779-13407/.minikube/files for local assets ...
	I0501 02:19:30.471697   27302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem -> 207852.pem in /etc/ssl/certs
	I0501 02:19:30.471754   27302 filesync.go:149] local asset: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/test/nested/copy/20785/hosts -> hosts in /etc/test/nested/copy/20785
	I0501 02:19:30.471794   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/20785
	I0501 02:19:30.483601   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem --> /etc/ssl/certs/207852.pem (1708 bytes)
	I0501 02:19:30.512365   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/test/nested/copy/20785/hosts --> /etc/test/nested/copy/20785/hosts (40 bytes)
	I0501 02:19:30.540651   27302 start.go:296] duration metric: took 169.667782ms for postStartSetup
	I0501 02:19:30.540676   27302 fix.go:56] duration metric: took 1.020464256s for fixHost
	I0501 02:19:30.540691   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.543228   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.543544   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.543565   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.543669   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.543818   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.543982   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.544097   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.544279   27302 main.go:141] libmachine: Using SSH client type: native
	I0501 02:19:30.544432   27302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0501 02:19:30.544436   27302 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0501 02:19:30.656481   27302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714529970.633363586
	
	I0501 02:19:30.656494   27302 fix.go:216] guest clock: 1714529970.633363586
	I0501 02:19:30.656502   27302 fix.go:229] Guest: 2024-05-01 02:19:30.633363586 +0000 UTC Remote: 2024-05-01 02:19:30.540678287 +0000 UTC m=+1.147555627 (delta=92.685299ms)
	I0501 02:19:30.656535   27302 fix.go:200] guest clock delta is within tolerance: 92.685299ms
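
The fix.go lines above read the guest clock over SSH (the `date +%s.%N` command just before) and compare it with the host-side timestamp, resynchronizing only when the difference exceeds a tolerance. A small Go sketch of that comparison; the 2s tolerance here is an illustrative value, not minikube's setting:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK reports the absolute guest/host clock difference and
    // whether it is within the given tolerance.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(92 * time.Millisecond) // a delta comparable to the log above
    	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
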
	I0501 02:19:30.656541   27302 start.go:83] releasing machines lock for "functional-167406", held for 1.136336978s
	I0501 02:19:30.656561   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.656802   27302 main.go:141] libmachine: (functional-167406) Calling .GetIP
	I0501 02:19:30.659387   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.659782   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.659791   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.659960   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.660461   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.660625   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:19:30.660715   27302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0501 02:19:30.660744   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.660850   27302 ssh_runner.go:195] Run: cat /version.json
	I0501 02:19:30.660866   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:19:30.663221   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.663516   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.663551   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.663568   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.663661   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.663819   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.663959   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.663959   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:30.663982   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:30.664155   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:19:30.664231   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.664287   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:19:30.664383   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:19:30.664481   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:19:30.745127   27302 ssh_runner.go:195] Run: systemctl --version
	I0501 02:19:30.768517   27302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0501 02:19:30.774488   27302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0501 02:19:30.774528   27302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0501 02:19:30.785790   27302 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0501 02:19:30.785800   27302 start.go:494] detecting cgroup driver to use...
	I0501 02:19:30.785853   27302 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0501 02:19:30.802226   27302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0501 02:19:30.816978   27302 docker.go:217] disabling cri-docker service (if available) ...
	I0501 02:19:30.817019   27302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0501 02:19:30.831597   27302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0501 02:19:30.845771   27302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0501 02:19:30.985885   27302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0501 02:19:31.138510   27302 docker.go:233] disabling docker service ...
	I0501 02:19:31.138553   27302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0501 02:19:31.160797   27302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0501 02:19:31.182214   27302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0501 02:19:31.342922   27302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0501 02:19:31.527687   27302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0501 02:19:31.546399   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0501 02:19:31.568500   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0501 02:19:31.580338   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0501 02:19:31.601655   27302 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0501 02:19:31.601733   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0501 02:19:31.615894   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:19:31.627888   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0501 02:19:31.639148   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0501 02:19:31.650308   27302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0501 02:19:31.661624   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0501 02:19:31.672388   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0501 02:19:31.684388   27302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0501 02:19:31.696664   27302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0501 02:19:31.706404   27302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0501 02:19:31.719548   27302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:19:31.869704   27302 ssh_runner.go:195] Run: sudo systemctl restart containerd
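
The block above rewrites /etc/containerd/config.toml in place with sed over SSH (sandbox image, SystemdCgroup, runtime type, CNI conf_dir) and then reloads systemd and restarts containerd. As a local Go sketch of just the SystemdCgroup edit, assuming the file is readable on the current machine; it mirrors the sed expression in the log but is not the code minikube runs:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setSystemdCgroup flips the SystemdCgroup key in a containerd config.toml,
    // preserving the line's indentation, much like the sed command above.
    func setSystemdCgroup(path string, enabled bool) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
    		fmt.Println("edit failed:", err)
    	}
    }
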
	I0501 02:19:31.907722   27302 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0501 02:19:31.907783   27302 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0501 02:19:31.913070   27302 retry.go:31] will retry after 832.519029ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
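
The retry above is minikube waiting (up to 60s) for /run/containerd/containerd.sock to reappear after the containerd restart. A minimal Go sketch of that kind of wait loop; the poll interval and timeout here are assumptions, not minikube's retry policy:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for path until it exists or the deadline passes.
    func waitForSocket(path string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
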
	I0501 02:19:32.746089   27302 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0501 02:19:32.751634   27302 start.go:562] Will wait 60s for crictl version
	I0501 02:19:32.751676   27302 ssh_runner.go:195] Run: which crictl
	I0501 02:19:32.756086   27302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0501 02:19:32.791299   27302 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.15
	RuntimeApiVersion:  v1
	I0501 02:19:32.791343   27302 ssh_runner.go:195] Run: containerd --version
	I0501 02:19:32.818691   27302 ssh_runner.go:195] Run: containerd --version
	I0501 02:19:32.851005   27302 out.go:177] * Preparing Kubernetes v1.30.0 on containerd 1.7.15 ...
	I0501 02:19:32.852228   27302 main.go:141] libmachine: (functional-167406) Calling .GetIP
	I0501 02:19:32.854728   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:32.855035   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:19:32.855053   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:19:32.855235   27302 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0501 02:19:32.861249   27302 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0501 02:19:32.862435   27302 kubeadm.go:877] updating cluster {Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount
:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0501 02:19:32.862527   27302 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0501 02:19:32.862574   27302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:19:32.903073   27302 containerd.go:627] all images are preloaded for containerd runtime.
	I0501 02:19:32.903097   27302 containerd.go:534] Images already preloaded, skipping extraction
	I0501 02:19:32.903148   27302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0501 02:19:32.943554   27302 containerd.go:627] all images are preloaded for containerd runtime.
	I0501 02:19:32.943565   27302 cache_images.go:84] Images are preloaded, skipping loading
	I0501 02:19:32.943572   27302 kubeadm.go:928] updating node { 192.168.39.209 8441 v1.30.0 containerd true true} ...
	I0501 02:19:32.943699   27302 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-167406 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0501 02:19:32.943758   27302 ssh_runner.go:195] Run: sudo crictl info
	I0501 02:19:32.986793   27302 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0501 02:19:32.986807   27302 cni.go:84] Creating CNI manager for ""
	I0501 02:19:32.986815   27302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:19:32.986822   27302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0501 02:19:32.986839   27302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-167406 NodeName:functional-167406 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubele
tConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0501 02:19:32.986939   27302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-167406"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0501 02:19:32.986990   27302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0501 02:19:32.997857   27302 binaries.go:44] Found k8s binaries, skipping transfer
	I0501 02:19:32.997921   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0501 02:19:33.010461   27302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0501 02:19:33.034391   27302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0501 02:19:33.056601   27302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2027 bytes)
	I0501 02:19:33.076127   27302 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0501 02:19:33.080439   27302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:19:33.231190   27302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:19:33.249506   27302 certs.go:68] Setting up /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406 for IP: 192.168.39.209
	I0501 02:19:33.249520   27302 certs.go:194] generating shared ca certs ...
	I0501 02:19:33.249539   27302 certs.go:226] acquiring lock for ca certs: {Name:mk634f0288fd77df2d93a075894d5fc692d45f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:19:33.249720   27302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18779-13407/.minikube/ca.key
	I0501 02:19:33.249779   27302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18779-13407/.minikube/proxy-client-ca.key
	I0501 02:19:33.249786   27302 certs.go:256] generating profile certs ...
	I0501 02:19:33.249895   27302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.key
	I0501 02:19:33.249952   27302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/apiserver.key.2355bc77
	I0501 02:19:33.249982   27302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/proxy-client.key
	I0501 02:19:33.250137   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/20785.pem (1338 bytes)
	W0501 02:19:33.250169   27302 certs.go:480] ignoring /home/jenkins/minikube-integration/18779-13407/.minikube/certs/20785_empty.pem, impossibly tiny 0 bytes
	I0501 02:19:33.250176   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca-key.pem (1679 bytes)
	I0501 02:19:33.250203   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/ca.pem (1078 bytes)
	I0501 02:19:33.250219   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/cert.pem (1123 bytes)
	I0501 02:19:33.250238   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/certs/key.pem (1675 bytes)
	I0501 02:19:33.250275   27302 certs.go:484] found cert: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem (1708 bytes)
	I0501 02:19:33.250965   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0501 02:19:33.278798   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0501 02:19:33.304380   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0501 02:19:33.331231   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0501 02:19:33.359390   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0501 02:19:33.387189   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0501 02:19:33.416881   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0501 02:19:33.444381   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0501 02:19:33.472023   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/ssl/certs/207852.pem --> /usr/share/ca-certificates/207852.pem (1708 bytes)
	I0501 02:19:33.498072   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0501 02:19:33.526348   27302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18779-13407/.minikube/certs/20785.pem --> /usr/share/ca-certificates/20785.pem (1338 bytes)
	I0501 02:19:33.554617   27302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0501 02:19:33.573705   27302 ssh_runner.go:195] Run: openssl version
	I0501 02:19:33.579942   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/207852.pem && ln -fs /usr/share/ca-certificates/207852.pem /etc/ssl/certs/207852.pem"
	I0501 02:19:33.593246   27302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/207852.pem
	I0501 02:19:33.598779   27302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  1 02:16 /usr/share/ca-certificates/207852.pem
	I0501 02:19:33.598808   27302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/207852.pem
	I0501 02:19:33.605547   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/207852.pem /etc/ssl/certs/3ec20f2e.0"
	I0501 02:19:33.616599   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0501 02:19:33.629515   27302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:19:33.634618   27302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  1 02:09 /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:19:33.634659   27302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0501 02:19:33.640675   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0501 02:19:33.651055   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20785.pem && ln -fs /usr/share/ca-certificates/20785.pem /etc/ssl/certs/20785.pem"
	I0501 02:19:33.664114   27302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20785.pem
	I0501 02:19:33.668922   27302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  1 02:16 /usr/share/ca-certificates/20785.pem
	I0501 02:19:33.668962   27302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20785.pem
	I0501 02:19:33.675578   27302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20785.pem /etc/ssl/certs/51391683.0"
	I0501 02:19:33.686506   27302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0501 02:19:33.691387   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0501 02:19:33.697494   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0501 02:19:33.703759   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0501 02:19:33.710510   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0501 02:19:33.716307   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0501 02:19:33.722636   27302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
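
The `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate is still valid for at least another 24 hours before the cluster restart. The same check expressed in Go with the standard library, as a sketch (the certificate path is taken from the log for illustration):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
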
	I0501 02:19:33.728495   27302 kubeadm.go:391] StartCluster: {Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:19:33.728590   27302 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0501 02:19:33.728619   27302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:19:33.769657   27302 cri.go:89] found id: "52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d"
	I0501 02:19:33.769669   27302 cri.go:89] found id: "ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54"
	I0501 02:19:33.769673   27302 cri.go:89] found id: "939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3"
	I0501 02:19:33.769676   27302 cri.go:89] found id: "a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339"
	I0501 02:19:33.769679   27302 cri.go:89] found id: "f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f"
	I0501 02:19:33.769682   27302 cri.go:89] found id: "5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75"
	I0501 02:19:33.769685   27302 cri.go:89] found id: "c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2"
	I0501 02:19:33.769688   27302 cri.go:89] found id: "281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84"
	I0501 02:19:33.769690   27302 cri.go:89] found id: "5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6"
	I0501 02:19:33.769703   27302 cri.go:89] found id: "6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e"
	I0501 02:19:33.769706   27302 cri.go:89] found id: "fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0"
	I0501 02:19:33.769709   27302 cri.go:89] found id: "09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2"
	I0501 02:19:33.769712   27302 cri.go:89] found id: "1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a"
	I0501 02:19:33.769715   27302 cri.go:89] found id: "5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb"
	I0501 02:19:33.769721   27302 cri.go:89] found id: ""
	I0501 02:19:33.769764   27302 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0501 02:19:33.796543   27302 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","pid":1600,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a/rootfs","created":"2024-05-01T02:17:54.261537993Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-xbtf9_049ec84e-c877-484d-b1b1-328156fb477d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-xbtf9","io.kubernetes.cri.sand
box-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"049ec84e-c877-484d-b1b1-328156fb477d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","pid":1692,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261/rootfs","created":"2024-05-01T02:17:54.490978244Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7db6d8ff4d-xv8bs_ecdc231e-5cfc-4826-9956-e1270e6e9390","io.kubernetes.cri.sandbox-memory":"178257920"
,"io.kubernetes.cri.sandbox-name":"coredns-7db6d8ff4d-xv8bs","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ecdc231e-5cfc-4826-9956-e1270e6e9390"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d","pid":3131,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d/rootfs","created":"2024-05-01T02:18:58.937492084Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.30.0","io.kubernetes.cri.sandbox-id":"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-167406","io.kubernetes.cri.sandbox-namespac
e":"kube-system","io.kubernetes.cri.sandbox-uid":"f9f7ede5128b64464fffeeb6b7a159f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75","pid":2691,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75/rootfs","created":"2024-05-01T02:18:46.237845172Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.11.1","io.kubernetes.cri.sandbox-id":"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261","io.kubernetes.cri.sandbox-name":"coredns-7db6d8ff4d-xv8bs","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ecdc231e-5cfc-4826-9956-e1270e6e9390"},"owner":"root"},{"ociVers
ion":"1.0.2-dev","id":"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","pid":1045,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871/rootfs","created":"2024-05-01T02:17:34.561663829Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-167406_f9f7ede5128b64464fffeeb6b7a159f5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kuber
netes.cri.sandbox-uid":"f9f7ede5128b64464fffeeb6b7a159f5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3","pid":2847,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3/rootfs","created":"2024-05-01T02:18:47.246522059Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.30.0","io.kubernetes.cri.sandbox-id":"a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"81f155d75e2d0f03623586cc74d3e9ec"},"owner":"root"},{"ociVersion":"1.0.2-dev"
,"id":"a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339","pid":2854,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339/rootfs","created":"2024-05-01T02:18:47.241651104Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri.sandbox-id":"bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd","io.kubernetes.cri.sandbox-name":"etcd-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fb03fdcce11d87d827499069eedf6b25"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","pid":1053,"status":"running","bundle":"/run/containerd/
io.containerd.runtime.v2.task/k8s.io/a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5/rootfs","created":"2024-05-01T02:17:34.593330575Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-167406_81f155d75e2d0f03623586cc74d3e9ec","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"81f155d75e2d0f03623586cc74d3e9ec"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bdca39c10acda1333c53e0b90122acff31f3c7
81b1a1153e1efe95bb97bb53fd","pid":1046,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd/rootfs","created":"2024-05-01T02:17:34.564975441Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-167406_fb03fdcce11d87d827499069eedf6b25","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fb03fdcce11d87d827499069eedf6b25"},"owner":"root"},{"ociV
ersion":"1.0.2-dev","id":"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2","pid":2601,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2/rootfs","created":"2024-05-01T02:18:41.150171135Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4b8999c0-090e-491d-9b39-9b6e98af676a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","pid":19
18,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed/rootfs","created":"2024-05-01T02:17:55.32966664Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4b8999c0-090e-491d-9b39-9b6e98af676a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4b8999c0-090e-491d-9b39-9b6e98af676a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ebe11aa9f88
04bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54","pid":3129,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54/rootfs","created":"2024-05-01T02:18:58.939294661Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.30.0","io.kubernetes.cri.sandbox-id":"fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"adcc40a72911f3d774df393212cbb315"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f","pid":2848,"status
":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f/rootfs","created":"2024-05-01T02:18:47.226546521Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.30.0","io.kubernetes.cri.sandbox-id":"13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a","io.kubernetes.cri.sandbox-name":"kube-proxy-xbtf9","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"049ec84e-c877-484d-b1b1-328156fb477d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf","pid":1033,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fec06a36743b8d1ce78158fb3e875904d2672f3d46e78
b859736a76389034aaf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf/rootfs","created":"2024-05-01T02:17:34.531465578Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-167406_adcc40a72911f3d774df393212cbb315","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-167406","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"adcc40a72911f3d774df393212cbb315"},"owner":"root"}]
	I0501 02:19:33.796872   27302 cri.go:126] list returned 14 containers
	I0501 02:19:33.796882   27302 cri.go:129] container: {ID:13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a Status:running}
	I0501 02:19:33.796896   27302 cri.go:131] skipping 13168bbfbe961b2676e92691a50f7b252eefedab4a97881bc763d5bf08038c2a - not in ps
	I0501 02:19:33.796901   27302 cri.go:129] container: {ID:2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261 Status:running}
	I0501 02:19:33.796908   27302 cri.go:131] skipping 2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261 - not in ps
	I0501 02:19:33.796912   27302 cri.go:129] container: {ID:52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d Status:running}
	I0501 02:19:33.796920   27302 cri.go:135] skipping {52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d running}: state = "running", want "paused"
	I0501 02:19:33.796928   27302 cri.go:129] container: {ID:5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 Status:running}
	I0501 02:19:33.796935   27302 cri.go:135] skipping {5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 running}: state = "running", want "paused"
	I0501 02:19:33.796940   27302 cri.go:129] container: {ID:88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 Status:running}
	I0501 02:19:33.796948   27302 cri.go:131] skipping 88ce05d0d4379251ab9e74a6764480130b43b3fa31e5c7ab6377f9ad83c5f871 - not in ps
	I0501 02:19:33.796952   27302 cri.go:129] container: {ID:939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 Status:running}
	I0501 02:19:33.796959   27302 cri.go:135] skipping {939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 running}: state = "running", want "paused"
	I0501 02:19:33.796964   27302 cri.go:129] container: {ID:a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 Status:running}
	I0501 02:19:33.796968   27302 cri.go:135] skipping {a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 running}: state = "running", want "paused"
	I0501 02:19:33.796971   27302 cri.go:129] container: {ID:a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5 Status:running}
	I0501 02:19:33.796974   27302 cri.go:131] skipping a3c933aaaf5a99593b0c7a0a2f872ca0339ab8a9e953012ab6d283d107b731b5 - not in ps
	I0501 02:19:33.796976   27302 cri.go:129] container: {ID:bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd Status:running}
	I0501 02:19:33.796979   27302 cri.go:131] skipping bdca39c10acda1333c53e0b90122acff31f3c781b1a1153e1efe95bb97bb53fd - not in ps
	I0501 02:19:33.796981   27302 cri.go:129] container: {ID:c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 Status:running}
	I0501 02:19:33.796985   27302 cri.go:135] skipping {c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 running}: state = "running", want "paused"
	I0501 02:19:33.796987   27302 cri.go:129] container: {ID:d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed Status:running}
	I0501 02:19:33.796991   27302 cri.go:131] skipping d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed - not in ps
	I0501 02:19:33.796993   27302 cri.go:129] container: {ID:ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 Status:running}
	I0501 02:19:33.796996   27302 cri.go:135] skipping {ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 running}: state = "running", want "paused"
	I0501 02:19:33.796999   27302 cri.go:129] container: {ID:f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f Status:running}
	I0501 02:19:33.797002   27302 cri.go:135] skipping {f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f running}: state = "running", want "paused"
	I0501 02:19:33.797010   27302 cri.go:129] container: {ID:fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf Status:running}
	I0501 02:19:33.797014   27302 cri.go:131] skipping fec06a36743b8d1ce78158fb3e875904d2672f3d46e78b859736a76389034aaf - not in ps
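
The cri.go lines above cross-reference the container IDs returned by `crictl ps` against the `runc list` output and skip anything that either is not in the ps output or is not in the wanted state (here "paused", so every running container is skipped). A compact Go sketch of that filter, with the types simplified for illustration:

    package main

    import "fmt"

    type container struct {
    	ID     string
    	Status string
    }

    // selectContainers keeps only containers that appear in wantIDs and whose
    // status matches wantState, mirroring the skip messages in the log above.
    func selectContainers(all []container, wantIDs map[string]bool, wantState string) []string {
    	var keep []string
    	for _, c := range all {
    		if !wantIDs[c.ID] {
    			continue // "not in ps"
    		}
    		if c.Status != wantState {
    			continue // e.g. state = "running", want "paused"
    		}
    		keep = append(keep, c.ID)
    	}
    	return keep
    }

    func main() {
    	all := []container{{ID: "52ce55f0...", Status: "running"}, {ID: "13168bbf...", Status: "running"}}
    	want := map[string]bool{"52ce55f0...": true}
    	fmt.Println(selectContainers(all, want, "paused")) // empty: everything is still running
    }
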
	I0501 02:19:33.797056   27302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0501 02:19:33.809208   27302 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0501 02:19:33.809215   27302 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0501 02:19:33.809218   27302 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0501 02:19:33.809251   27302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0501 02:19:33.820117   27302 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:19:33.820734   27302 kubeconfig.go:125] found "functional-167406" server: "https://192.168.39.209:8441"
	I0501 02:19:33.822281   27302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0501 02:19:33.833529   27302 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
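
The drift check above is a unified diff of the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new; a non-empty diff (diff exit status 1) is what triggers the cluster reconfiguration. A sketch of the same check in Go, run locally against the two paths shown in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u old new`. diff exits 0 when the files are
    // identical and 1 when they differ; anything else is a real error.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
    		return true, string(out), nil
    	}
    	return false, "", err
    }

    func main() {
    	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Println("diff failed:", err)
    		return
    	}
    	fmt.Println("drifted:", drifted)
    	fmt.Print(diff)
    }
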
	I0501 02:19:33.833560   27302 kubeadm.go:1154] stopping kube-system containers ...
	I0501 02:19:33.833570   27302 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0501 02:19:33.833602   27302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0501 02:19:33.876099   27302 cri.go:89] found id: "52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d"
	I0501 02:19:33.876109   27302 cri.go:89] found id: "ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54"
	I0501 02:19:33.876112   27302 cri.go:89] found id: "939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3"
	I0501 02:19:33.876114   27302 cri.go:89] found id: "a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339"
	I0501 02:19:33.876121   27302 cri.go:89] found id: "f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f"
	I0501 02:19:33.876123   27302 cri.go:89] found id: "5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75"
	I0501 02:19:33.876125   27302 cri.go:89] found id: "c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2"
	I0501 02:19:33.876126   27302 cri.go:89] found id: "281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84"
	I0501 02:19:33.876128   27302 cri.go:89] found id: "5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6"
	I0501 02:19:33.876132   27302 cri.go:89] found id: "6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e"
	I0501 02:19:33.876133   27302 cri.go:89] found id: "fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0"
	I0501 02:19:33.876135   27302 cri.go:89] found id: "09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2"
	I0501 02:19:33.876137   27302 cri.go:89] found id: "1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a"
	I0501 02:19:33.876138   27302 cri.go:89] found id: "5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb"
	I0501 02:19:33.876143   27302 cri.go:89] found id: ""
	I0501 02:19:33.876147   27302 cri.go:234] Stopping containers: [52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb]
	I0501 02:19:33.876187   27302 ssh_runner.go:195] Run: which crictl
	I0501 02:19:33.880970   27302 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb
	I0501 02:19:49.400461   27302 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb: (15.519438939s)
	W0501 02:19:49.400521   27302 kubeadm.go:638] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54 939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3 a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339 f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f 5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75 c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2 281bc9607b8141be3442c67e2a5120fd5117a284b42a3ead6902673c1a19eb84 5b632626e8403a57504a35b83a4c918da61898f206b53e5c6ed4b0dd93cea4c6 6b28813b92a8b359a1174a4c382c403a7d4ed8e0f912c3690a4e93a903338c4e fff2cd3c1952ed435b47dc10274b681d08357d4ed13a48b937ea92c5bf35bff0 09d95143f9a211dc3faeb0d57043a2092229fbb316dfd816662f8dc18c962be2 1f5dcc16765a8d682cfcbe7cd84e23b87ffe1c147a7e461eb3d26acb57ae582a 5e1e6e2bcdde84d99af695d7af68c58cb7d4edd6d762bb0ea02236b174dddbcb: Process exited with status 1
	stdout:
	52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d
	ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54
	939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3
	a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339
	f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f
	5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75
	
	stderr:
	E0501 02:19:49.373801    3825 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2\": not found" containerID="c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2"
	time="2024-05-01T02:19:49Z" level=fatal msg="stopping the container \"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c84592f633be7982428f789ce2d1ab1997af7782f3f1d6ddb79537fbf47bf4d2\": not found"
	I0501 02:19:49.400576   27302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0501 02:19:49.442489   27302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0501 02:19:49.453682   27302 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May  1 02:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May  1 02:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May  1 02:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 May  1 02:18 /etc/kubernetes/scheduler.conf
	
	I0501 02:19:49.453722   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0501 02:19:49.463450   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0501 02:19:49.473268   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0501 02:19:49.482593   27302 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:19:49.482620   27302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0501 02:19:49.492406   27302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0501 02:19:49.501589   27302 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:19:49.501621   27302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0501 02:19:49.511385   27302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0501 02:19:49.521299   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:49.576852   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.275401   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.501617   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.586395   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:50.669276   27302 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:19:50.669347   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:19:51.169802   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:19:51.670333   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:19:51.687969   27302 api_server.go:72] duration metric: took 1.018693775s to wait for apiserver process to appear ...
	I0501 02:19:51.687984   27302 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:19:51.688003   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:52.986291   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0501 02:19:52.986313   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0501 02:19:52.986323   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:53.043640   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:53.043668   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:53.188909   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:53.193215   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:53.193230   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:53.688857   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:53.693916   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:53.693934   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:54.188678   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:54.205628   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0501 02:19:54.205654   27302 api_server.go:103] status: https://192.168.39.209:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0501 02:19:54.688294   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:19:54.692103   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 200:
	ok
	I0501 02:19:54.698200   27302 api_server.go:141] control plane version: v1.30.0
	I0501 02:19:54.698212   27302 api_server.go:131] duration metric: took 3.010224858s to wait for apiserver health ...
	I0501 02:19:54.698218   27302 cni.go:84] Creating CNI manager for ""
	I0501 02:19:54.698223   27302 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:19:54.699989   27302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0501 02:19:54.701380   27302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0501 02:19:54.716172   27302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0501 02:19:54.741211   27302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:19:54.757093   27302 system_pods.go:59] 7 kube-system pods found
	I0501 02:19:54.757117   27302 system_pods.go:61] "coredns-7db6d8ff4d-xv8bs" [ecdc231e-5cfc-4826-9956-e1270e6e9390] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0501 02:19:54.757122   27302 system_pods.go:61] "etcd-functional-167406" [c756611c-5955-4eb6-9e66-555a18726767] Running
	I0501 02:19:54.757130   27302 system_pods.go:61] "kube-apiserver-functional-167406" [4cd1e668-c6c5-42d0-8eff-11d1e7a37cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0501 02:19:54.757141   27302 system_pods.go:61] "kube-controller-manager-functional-167406" [753f721a-d8f9-4aae-a8e5-42e47750f595] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0501 02:19:54.757148   27302 system_pods.go:61] "kube-proxy-xbtf9" [049ec84e-c877-484d-b1b1-328156fb477d] Running
	I0501 02:19:54.757156   27302 system_pods.go:61] "kube-scheduler-functional-167406" [d249cb29-5a87-45f6-90fa-4b962d7394b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0501 02:19:54.757162   27302 system_pods.go:61] "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:19:54.757168   27302 system_pods.go:74] duration metric: took 15.946257ms to wait for pod list to return data ...
	I0501 02:19:54.757176   27302 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:19:54.760302   27302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:19:54.760318   27302 node_conditions.go:123] node cpu capacity is 2
	I0501 02:19:54.760328   27302 node_conditions.go:105] duration metric: took 3.147862ms to run NodePressure ...
	I0501 02:19:54.760346   27302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0501 02:19:55.029033   27302 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0501 02:19:55.034633   27302 kubeadm.go:733] kubelet initialised
	I0501 02:19:55.034651   27302 kubeadm.go:734] duration metric: took 5.595558ms waiting for restarted kubelet to initialise ...
	I0501 02:19:55.034659   27302 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:19:55.045035   27302 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:19:57.051415   27302 pod_ready.go:102] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"False"
	I0501 02:19:59.054146   27302 pod_ready.go:102] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"False"
	I0501 02:20:01.552035   27302 pod_ready.go:102] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"False"
	I0501 02:20:03.052650   27302 pod_ready.go:92] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:03.052662   27302 pod_ready.go:81] duration metric: took 8.007609985s for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:03.052668   27302 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:03.058012   27302 pod_ready.go:92] pod "etcd-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:03.058023   27302 pod_ready.go:81] duration metric: took 5.349333ms for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:03.058033   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:05.064872   27302 pod_ready.go:102] pod "kube-apiserver-functional-167406" in "kube-system" namespace has status "Ready":"False"
	I0501 02:20:05.565939   27302 pod_ready.go:92] pod "kube-apiserver-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:05.565953   27302 pod_ready.go:81] duration metric: took 2.507911806s for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:05.565964   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.072548   27302 pod_ready.go:92] pod "kube-controller-manager-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.072562   27302 pod_ready.go:81] duration metric: took 506.587642ms for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.072570   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.077468   27302 pod_ready.go:92] pod "kube-proxy-xbtf9" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.077475   27302 pod_ready.go:81] duration metric: took 4.901001ms for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.077482   27302 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.082661   27302 pod_ready.go:92] pod "kube-scheduler-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.082667   27302 pod_ready.go:81] duration metric: took 5.180679ms for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.082673   27302 pod_ready.go:38] duration metric: took 11.048005881s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:20:06.082686   27302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0501 02:20:06.096020   27302 ops.go:34] apiserver oom_adj: -16
	I0501 02:20:06.096030   27302 kubeadm.go:591] duration metric: took 32.286806378s to restartPrimaryControlPlane
	I0501 02:20:06.096037   27302 kubeadm.go:393] duration metric: took 32.367551096s to StartCluster
	I0501 02:20:06.096053   27302 settings.go:142] acquiring lock: {Name:mk5412669f58875b6a0bd1d6a1dcb2e935592f4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:20:06.096132   27302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:20:06.096736   27302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13407/kubeconfig: {Name:mk4670d16c1b854bc97e144ac00ddd58ecc61c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:20:06.096929   27302 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0501 02:20:06.098607   27302 out.go:177] * Verifying Kubernetes components...
	I0501 02:20:06.097009   27302 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0501 02:20:06.098632   27302 addons.go:69] Setting storage-provisioner=true in profile "functional-167406"
	I0501 02:20:06.099827   27302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0501 02:20:06.099852   27302 addons.go:234] Setting addon storage-provisioner=true in "functional-167406"
	W0501 02:20:06.099860   27302 addons.go:243] addon storage-provisioner should already be in state true
	I0501 02:20:06.099881   27302 host.go:66] Checking if "functional-167406" exists ...
	I0501 02:20:06.097108   27302 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:20:06.098644   27302 addons.go:69] Setting default-storageclass=true in profile "functional-167406"
	I0501 02:20:06.099986   27302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-167406"
	I0501 02:20:06.100179   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.100220   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.100306   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.100341   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.114376   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44137
	I0501 02:20:06.114748   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.115211   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.115227   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.115351   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40619
	I0501 02:20:06.115569   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.115713   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.115765   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:20:06.116239   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.116255   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.116544   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.117096   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.117132   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.118355   27302 addons.go:234] Setting addon default-storageclass=true in "functional-167406"
	W0501 02:20:06.118363   27302 addons.go:243] addon default-storageclass should already be in state true
	I0501 02:20:06.118386   27302 host.go:66] Checking if "functional-167406" exists ...
	I0501 02:20:06.118724   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.118757   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.132056   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I0501 02:20:06.132367   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.132796   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.132824   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.133092   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.133652   27302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:06.133687   27302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:06.135199   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0501 02:20:06.135589   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.136121   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.136138   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.136403   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.136599   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:20:06.138120   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:06.140321   27302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0501 02:20:06.141799   27302 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:20:06.141809   27302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0501 02:20:06.141830   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:20:06.144487   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.144874   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:20:06.144901   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.145049   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:20:06.145233   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:20:06.145425   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:20:06.145550   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:20:06.148575   27302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36739
	I0501 02:20:06.148910   27302 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:06.149344   27302 main.go:141] libmachine: Using API Version  1
	I0501 02:20:06.149353   27302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:06.149639   27302 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:06.149825   27302 main.go:141] libmachine: (functional-167406) Calling .GetState
	I0501 02:20:06.151057   27302 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:06.151309   27302 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0501 02:20:06.151318   27302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0501 02:20:06.151332   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
	I0501 02:20:06.153814   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.154212   27302 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
	I0501 02:20:06.154230   27302 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
	I0501 02:20:06.154354   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
	I0501 02:20:06.154522   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
	I0501 02:20:06.154665   27302 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
	I0501 02:20:06.154784   27302 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
	I0501 02:20:06.291969   27302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0501 02:20:06.310462   27302 node_ready.go:35] waiting up to 6m0s for node "functional-167406" to be "Ready" ...
	I0501 02:20:06.314577   27302 node_ready.go:49] node "functional-167406" has status "Ready":"True"
	I0501 02:20:06.314587   27302 node_ready.go:38] duration metric: took 4.105122ms for node "functional-167406" to be "Ready" ...
	I0501 02:20:06.314595   27302 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:20:06.320143   27302 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.392851   27302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0501 02:20:06.403455   27302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0501 02:20:06.650181   27302 pod_ready.go:92] pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:06.650195   27302 pod_ready.go:81] duration metric: took 330.040348ms for pod "coredns-7db6d8ff4d-xv8bs" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:06.650206   27302 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.049853   27302 pod_ready.go:92] pod "etcd-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:07.049864   27302 pod_ready.go:81] duration metric: took 399.652977ms for pod "etcd-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.049873   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.068039   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068053   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.068102   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068112   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.068321   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.068325   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.068330   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.068335   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.068343   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068345   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.068350   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.068352   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.069878   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.069888   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.069896   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.069905   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.070002   27302 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
	I0501 02:20:07.070009   27302 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
	I0501 02:20:07.079813   27302 main.go:141] libmachine: Making call to close driver server
	I0501 02:20:07.079823   27302 main.go:141] libmachine: (functional-167406) Calling .Close
	I0501 02:20:07.080103   27302 main.go:141] libmachine: Successfully made call to close driver server
	I0501 02:20:07.080112   27302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0501 02:20:07.082343   27302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0501 02:20:07.083683   27302 addons.go:505] duration metric: took 986.687248ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0501 02:20:07.449877   27302 pod_ready.go:92] pod "kube-apiserver-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:07.449897   27302 pod_ready.go:81] duration metric: took 400.018258ms for pod "kube-apiserver-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.449908   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.849418   27302 pod_ready.go:92] pod "kube-controller-manager-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:07.849429   27302 pod_ready.go:81] duration metric: took 399.514247ms for pod "kube-controller-manager-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:07.849437   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.249116   27302 pod_ready.go:92] pod "kube-proxy-xbtf9" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:08.249126   27302 pod_ready.go:81] duration metric: took 399.68419ms for pod "kube-proxy-xbtf9" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.249134   27302 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.662879   27302 pod_ready.go:92] pod "kube-scheduler-functional-167406" in "kube-system" namespace has status "Ready":"True"
	I0501 02:20:08.662889   27302 pod_ready.go:81] duration metric: took 413.749499ms for pod "kube-scheduler-functional-167406" in "kube-system" namespace to be "Ready" ...
	I0501 02:20:08.662897   27302 pod_ready.go:38] duration metric: took 2.348293104s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0501 02:20:08.662908   27302 api_server.go:52] waiting for apiserver process to appear ...
	I0501 02:20:08.662954   27302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:20:08.693543   27302 api_server.go:72] duration metric: took 2.596595813s to wait for apiserver process to appear ...
	I0501 02:20:08.693556   27302 api_server.go:88] waiting for apiserver healthz status ...
	I0501 02:20:08.693579   27302 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8441/healthz ...
	I0501 02:20:08.712207   27302 api_server.go:279] https://192.168.39.209:8441/healthz returned 200:
	ok
	I0501 02:20:08.713171   27302 api_server.go:141] control plane version: v1.30.0
	I0501 02:20:08.713188   27302 api_server.go:131] duration metric: took 19.62622ms to wait for apiserver health ...
	I0501 02:20:08.713196   27302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0501 02:20:08.853696   27302 system_pods.go:59] 7 kube-system pods found
	I0501 02:20:08.853712   27302 system_pods.go:61] "coredns-7db6d8ff4d-xv8bs" [ecdc231e-5cfc-4826-9956-e1270e6e9390] Running
	I0501 02:20:08.853718   27302 system_pods.go:61] "etcd-functional-167406" [c756611c-5955-4eb6-9e66-555a18726767] Running
	I0501 02:20:08.853722   27302 system_pods.go:61] "kube-apiserver-functional-167406" [4cd1e668-c6c5-42d0-8eff-11d1e7a37cb5] Running
	I0501 02:20:08.853726   27302 system_pods.go:61] "kube-controller-manager-functional-167406" [753f721a-d8f9-4aae-a8e5-42e47750f595] Running
	I0501 02:20:08.853730   27302 system_pods.go:61] "kube-proxy-xbtf9" [049ec84e-c877-484d-b1b1-328156fb477d] Running
	I0501 02:20:08.853732   27302 system_pods.go:61] "kube-scheduler-functional-167406" [d249cb29-5a87-45f6-90fa-4b962d7394b6] Running
	I0501 02:20:08.853736   27302 system_pods.go:61] "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:20:08.853743   27302 system_pods.go:74] duration metric: took 140.541233ms to wait for pod list to return data ...
	I0501 02:20:08.853752   27302 default_sa.go:34] waiting for default service account to be created ...
	I0501 02:20:09.049668   27302 default_sa.go:45] found service account: "default"
	I0501 02:20:09.049681   27302 default_sa.go:55] duration metric: took 195.92317ms for default service account to be created ...
	I0501 02:20:09.049690   27302 system_pods.go:116] waiting for k8s-apps to be running ...
	I0501 02:20:09.255439   27302 system_pods.go:86] 7 kube-system pods found
	I0501 02:20:09.255454   27302 system_pods.go:89] "coredns-7db6d8ff4d-xv8bs" [ecdc231e-5cfc-4826-9956-e1270e6e9390] Running
	I0501 02:20:09.255460   27302 system_pods.go:89] "etcd-functional-167406" [c756611c-5955-4eb6-9e66-555a18726767] Running
	I0501 02:20:09.255466   27302 system_pods.go:89] "kube-apiserver-functional-167406" [4cd1e668-c6c5-42d0-8eff-11d1e7a37cb5] Running
	I0501 02:20:09.255471   27302 system_pods.go:89] "kube-controller-manager-functional-167406" [753f721a-d8f9-4aae-a8e5-42e47750f595] Running
	I0501 02:20:09.255475   27302 system_pods.go:89] "kube-proxy-xbtf9" [049ec84e-c877-484d-b1b1-328156fb477d] Running
	I0501 02:20:09.255478   27302 system_pods.go:89] "kube-scheduler-functional-167406" [d249cb29-5a87-45f6-90fa-4b962d7394b6] Running
	I0501 02:20:09.255485   27302 system_pods.go:89] "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0501 02:20:09.255492   27302 system_pods.go:126] duration metric: took 205.797561ms to wait for k8s-apps to be running ...
	I0501 02:20:09.255501   27302 system_svc.go:44] waiting for kubelet service to be running ....
	I0501 02:20:09.255557   27302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:20:09.275685   27302 system_svc.go:56] duration metric: took 20.175711ms WaitForService to wait for kubelet
	I0501 02:20:09.275704   27302 kubeadm.go:576] duration metric: took 3.178756744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0501 02:20:09.275720   27302 node_conditions.go:102] verifying NodePressure condition ...
	I0501 02:20:09.449853   27302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0501 02:20:09.449866   27302 node_conditions.go:123] node cpu capacity is 2
	I0501 02:20:09.449874   27302 node_conditions.go:105] duration metric: took 174.150822ms to run NodePressure ...
	I0501 02:20:09.449883   27302 start.go:240] waiting for startup goroutines ...
	I0501 02:20:09.449889   27302 start.go:245] waiting for cluster config update ...
	I0501 02:20:09.449897   27302 start.go:254] writing updated cluster config ...
	I0501 02:20:09.450124   27302 ssh_runner.go:195] Run: rm -f paused
	I0501 02:20:09.497259   27302 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0501 02:20:09.499251   27302 out.go:177] * Done! kubectl is now configured to use "functional-167406" cluster and "default" namespace by default
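For context on the healthz polling recorded above (api_server.go querying https://192.168.39.209:8441/healthz until the initial 403 and 500 responses turn into a 200), here is a minimal Go sketch of the same retry pattern. It is an illustrative approximation rather than minikube's source: it skips TLS verification for brevity, whereas the real client authenticates with the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes, printing non-200 bodies the way the
// log above shows 403 and 500 responses before the final "ok".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification skipped for brevity only; do not do this in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.209:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
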
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ae6f4e38ab4f3       6e38f40d628db       18 seconds ago       Running             storage-provisioner       4                   d3f41e0f975da       storage-provisioner
	ef9868f7ee3c3       cbb01a7bd410d       32 seconds ago       Running             coredns                   2                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
	b8e78e9b1aa3a       6e38f40d628db       32 seconds ago       Exited              storage-provisioner       3                   d3f41e0f975da       storage-provisioner
	429a24a39fec5       c42f13656d0b2       35 seconds ago       Running             kube-apiserver            2                   88ce05d0d4379       kube-apiserver-functional-167406
	350765a60a825       c7aad43836fa5       35 seconds ago       Running             kube-controller-manager   2                   fec06a36743b8       kube-controller-manager-functional-167406
	a513f3286b775       259c8277fcbbc       42 seconds ago       Running             kube-scheduler            2                   a3c933aaaf5a9       kube-scheduler-functional-167406
	3b377dde86d26       3861cfcd7c04c       42 seconds ago       Running             etcd                      2                   bdca39c10acda       etcd-functional-167406
	6df6abb34b88d       a0bf559e280cf       42 seconds ago       Running             kube-proxy                2                   13168bbfbe961       kube-proxy-xbtf9
	52ce55f010233       c42f13656d0b2       About a minute ago   Exited              kube-apiserver            1                   88ce05d0d4379       kube-apiserver-functional-167406
	ebe11aa9f8804       c7aad43836fa5       About a minute ago   Exited              kube-controller-manager   1                   fec06a36743b8       kube-controller-manager-functional-167406
	939e53f1e1db0       259c8277fcbbc       About a minute ago   Exited              kube-scheduler            1                   a3c933aaaf5a9       kube-scheduler-functional-167406
	a1f43ae8da4b3       3861cfcd7c04c       About a minute ago   Exited              etcd                      1                   bdca39c10acda       etcd-functional-167406
	f0dc76865d087       a0bf559e280cf       About a minute ago   Exited              kube-proxy                1                   13168bbfbe961       kube-proxy-xbtf9
	5652211ff7b29       cbb01a7bd410d       About a minute ago   Exited              coredns                   1                   2132b99b3eb2c       coredns-7db6d8ff4d-xv8bs
	
	
	==> containerd <==
	May 01 02:19:51 functional-167406 containerd[3593]: time="2024-05-01T02:19:51.156207467Z" level=info msg="StartContainer for \"350765a60a82586dd2a69686a601b5d16ad68d05a64cd6e4d3359df1866500b5\" returns successfully"
	May 01 02:19:51 functional-167406 containerd[3593]: time="2024-05-01T02:19:51.171550944Z" level=info msg="StartContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" returns successfully"
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.068013219Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.873698453Z" level=info msg="CreateContainer within sandbox \"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261\" for container &ContainerMetadata{Name:coredns,Attempt:2,}"
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.874354592Z" level=info msg="CreateContainer within sandbox \"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:3,}"
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.914443000Z" level=info msg="CreateContainer within sandbox \"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed\" for &ContainerMetadata{Name:storage-provisioner,Attempt:3,} returns container id \"b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965\""
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.914946703Z" level=info msg="StartContainer for \"b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965\""
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.926616911Z" level=info msg="CreateContainer within sandbox \"2132b99b3eb2c90d0e871254d4f2d745a97cfd6e637eec88e4049c43b2a93261\" for &ContainerMetadata{Name:coredns,Attempt:2,} returns container id \"ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485\""
	May 01 02:19:53 functional-167406 containerd[3593]: time="2024-05-01T02:19:53.927031332Z" level=info msg="StartContainer for \"ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485\""
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.048753981Z" level=info msg="StartContainer for \"b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965\" returns successfully"
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.119825800Z" level=info msg="StartContainer for \"ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485\" returns successfully"
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.142481068Z" level=info msg="shim disconnected" id=b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965 namespace=k8s.io
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.142643013Z" level=warning msg="cleaning up after shim disconnected" id=b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965 namespace=k8s.io
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.142773469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.696328638Z" level=info msg="RemoveContainer for \"7aaa1a01414d1ba659b5c8289583d21c96e0824437226a7421dfa4ff22fa0fa5\""
	May 01 02:19:54 functional-167406 containerd[3593]: time="2024-05-01T02:19:54.708310102Z" level=info msg="RemoveContainer for \"7aaa1a01414d1ba659b5c8289583d21c96e0824437226a7421dfa4ff22fa0fa5\" returns successfully"
	May 01 02:20:08 functional-167406 containerd[3593]: time="2024-05-01T02:20:08.604591293Z" level=info msg="CreateContainer within sandbox \"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:4,}"
	May 01 02:20:08 functional-167406 containerd[3593]: time="2024-05-01T02:20:08.626050966Z" level=info msg="CreateContainer within sandbox \"d3f41e0f975da05044bd83bafb86740f8aa8c4dd48528bd17048a570a5bf30ed\" for &ContainerMetadata{Name:storage-provisioner,Attempt:4,} returns container id \"ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b\""
	May 01 02:20:08 functional-167406 containerd[3593]: time="2024-05-01T02:20:08.626735303Z" level=info msg="StartContainer for \"ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b\""
	May 01 02:20:08 functional-167406 containerd[3593]: time="2024-05-01T02:20:08.749783208Z" level=info msg="StartContainer for \"ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b\" returns successfully"
	May 01 02:20:10 functional-167406 containerd[3593]: time="2024-05-01T02:20:10.555611067Z" level=info msg="StopContainer for \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" with timeout 30 (s)"
	May 01 02:20:10 functional-167406 containerd[3593]: time="2024-05-01T02:20:10.556163529Z" level=info msg="Stop container \"429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5\" with signal terminated"
	May 01 02:20:19 functional-167406 containerd[3593]: time="2024-05-01T02:20:19.351055932Z" level=info msg="ImageCreate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\""
	May 01 02:20:19 functional-167406 containerd[3593]: time="2024-05-01T02:20:19.359428272Z" level=info msg="ImageCreate event name:\"sha256:b08046378d77c9dfdab5fbe738244949bc9d487d7b394813b7209ff1f43b82cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	May 01 02:20:19 functional-167406 containerd[3593]: time="2024-05-01T02:20:19.359835265Z" level=info msg="ImageUpdate event name:\"gcr.io/google-containers/addon-resizer:functional-167406\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> coredns [5652211ff7b2959780a3df6c3982721c0635c2fc56281ef1f6b91dfd2a204b75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43474 - 46251 "HINFO IN 6093638740258044659.1554125567718258750. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008772047s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: unknown (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: unknown (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ef9868f7ee3c37e5d0905ec5f86a854f4d72fd6fa06197f96f693fcc6e53a485] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51551 - 12396 "HINFO IN 7161565364375486857.4859467522399385342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006762819s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[May 1 02:18] kauditd_printk_skb: 94 callbacks suppressed
	[ +32.076735] systemd-fstab-generator[2180]: Ignoring "noauto" option for root device
	[  +0.169403] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.211042] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.165983] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.323845] systemd-fstab-generator[2247]: Ignoring "noauto" option for root device
	[  +2.137091] systemd-fstab-generator[2452]: Ignoring "noauto" option for root device
	[  +0.094208] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.831325] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.516674] kauditd_printk_skb: 14 callbacks suppressed
	[  +1.457832] systemd-fstab-generator[3047]: Ignoring "noauto" option for root device
	[May 1 02:19] kauditd_printk_skb: 24 callbacks suppressed
	[  +7.754628] systemd-fstab-generator[3215]: Ignoring "noauto" option for root device
	[ +14.125843] systemd-fstab-generator[3518]: Ignoring "noauto" option for root device
	[  +0.076849] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.077827] systemd-fstab-generator[3530]: Ignoring "noauto" option for root device
	[  +0.188600] systemd-fstab-generator[3544]: Ignoring "noauto" option for root device
	[  +0.171319] systemd-fstab-generator[3556]: Ignoring "noauto" option for root device
	[  +0.356766] systemd-fstab-generator[3585]: Ignoring "noauto" option for root device
	[  +1.365998] systemd-fstab-generator[3741]: Ignoring "noauto" option for root device
	[ +10.881538] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.346698] kauditd_printk_skb: 17 callbacks suppressed
	[  +1.027943] systemd-fstab-generator[4273]: Ignoring "noauto" option for root device
	[  +4.180252] kauditd_printk_skb: 36 callbacks suppressed
	[May 1 02:20] systemd-fstab-generator[4573]: Ignoring "noauto" option for root device
	
	
	==> etcd [3b377dde86d267c8742b885c6b59382115c63d70d37c1823e0e1d10f97eff8b3] <==
	{"level":"info","ts":"2024-05-01T02:19:44.776714Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.77674Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-01T02:19:44.777129Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b switched to configuration voters=(8441320971333687067)"}
	{"level":"info","ts":"2024-05-01T02:19:44.777351Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","added-peer-id":"752598b30b66571b","added-peer-peer-urls":["https://192.168.39.209:2380"]}
	{"level":"info","ts":"2024-05-01T02:19:44.777547Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cbe1704648cf4c0c","local-member-id":"752598b30b66571b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.777589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-01T02:19:44.781098Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-01T02:19:44.781692Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"752598b30b66571b","initial-advertise-peer-urls":["https://192.168.39.209:2380"],"listen-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.209:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-01T02:19:44.781836Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-01T02:19:44.782391Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.782447Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:46.149524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:19:46.149853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.149916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 4"}
	{"level":"info","ts":"2024-05-01T02:19:46.152677Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:19:46.152701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.152914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:19:46.153408Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.153471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:19:46.155829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:19:46.156978Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [a1f43ae8da4b3335589ba2866a30b71e1f817377378e3f5d5c38c0c1438a8339] <==
	{"level":"info","ts":"2024-05-01T02:18:47.383086Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:18:48.759417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgPreVoteResp from 752598b30b66571b at term 2"}
	{"level":"info","ts":"2024-05-01T02:18:48.759608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became candidate at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b received MsgVoteResp from 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"752598b30b66571b became leader at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.759636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 752598b30b66571b elected leader 752598b30b66571b at term 3"}
	{"level":"info","ts":"2024-05-01T02:18:48.767118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.767067Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"752598b30b66571b","local-member-attributes":"{Name:functional-167406 ClientURLs:[https://192.168.39.209:2379]}","request-path":"/0/members/752598b30b66571b/attributes","cluster-id":"cbe1704648cf4c0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-01T02:18:48.768075Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-01T02:18:48.768693Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.768883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-01T02:18:48.769381Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.209:2379"}
	{"level":"info","ts":"2024-05-01T02:18:48.770832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-01T02:19:44.172843Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-01T02:19:44.172953Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	{"level":"warn","ts":"2024-05-01T02:19:44.173117Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.17315Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175169Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-01T02:19:44.175192Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.209:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-01T02:19:44.175362Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"752598b30b66571b","current-leader-member-id":"752598b30b66571b"}
	{"level":"info","ts":"2024-05-01T02:19:44.178843Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179043Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.209:2380"}
	{"level":"info","ts":"2024-05-01T02:19:44.179065Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-167406","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.209:2380"],"advertise-client-urls":["https://192.168.39.209:2379"]}
	
	
	==> kernel <==
	 02:20:27 up 3 min,  0 users,  load average: 1.01, 0.52, 0.20
	Linux functional-167406 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [429a24a39fec5c3b0d7b4a5dabc2c9d824d1eabdb7ca603c247c75ccfc1e76a5] <==
	I0501 02:20:10.588037       1 controller.go:167] Shutting down OpenAPI controller
	I0501 02:20:10.588047       1 autoregister_controller.go:165] Shutting down autoregister controller
	I0501 02:20:10.588059       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0501 02:20:10.588069       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0501 02:20:10.588079       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0501 02:20:10.588085       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0501 02:20:10.588150       1 controller.go:129] Ending legacy_token_tracking_controller
	I0501 02:20:10.588186       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0501 02:20:10.592418       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0501 02:20:10.594973       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0501 02:20:10.595024       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0501 02:20:10.595151       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 02:20:10.595337       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:20:10.595353       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 02:20:10.595372       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0501 02:20:10.595406       1 secure_serving.go:258] Stopped listening on [::]:8441
	I0501 02:20:10.595418       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0501 02:20:10.595835       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 02:20:10.598324       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:20:10.601393       1 controller.go:157] Shutting down quota evaluator
	I0501 02:20:10.601407       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:20:10.601629       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:20:10.601638       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:20:10.601643       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:20:10.601647       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [52ce55f010233a04018ba1a6afb48d12e92343c9e7ba29f083f6dd6303b9a30d] <==
	I0501 02:19:33.934728       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0501 02:19:33.935337       1 naming_controller.go:302] Shutting down NamingConditionController
	I0501 02:19:33.937005       1 controller.go:117] Shutting down OpenAPI V3 controller
	I0501 02:19:33.937289       1 controller.go:167] Shutting down OpenAPI controller
	I0501 02:19:33.937419       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:19:33.937516       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0501 02:19:33.937619       1 controller.go:157] Shutting down quota evaluator
	I0501 02:19:33.937636       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:19:33.932967       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0501 02:19:33.932633       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0501 02:19:33.932975       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0501 02:19:33.933014       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0501 02:19:33.933029       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0501 02:19:33.933092       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0501 02:19:33.932643       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0501 02:19:33.932656       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0501 02:19:33.932665       1 establishing_controller.go:87] Shutting down EstablishingController
	I0501 02:19:33.932675       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0501 02:19:33.933038       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0501 02:19:33.933043       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0501 02:19:33.933081       1 secure_serving.go:258] Stopped listening on [::]:8441
	I0501 02:19:33.940207       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:19:33.941508       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:19:33.941544       1 controller.go:176] quota evaluator worker shutdown
	I0501 02:19:33.942342       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [350765a60a82586dd2a69686a601b5d16ad68d05a64cd6e4d3359df1866500b5] <==
	I0501 02:20:05.548426       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 02:20:05.552106       1 shared_informer.go:320] Caches are synced for PV protection
	I0501 02:20:05.557461       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 02:20:05.559188       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0501 02:20:05.560921       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0501 02:20:05.561099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.912µs"
	I0501 02:20:05.565885       1 shared_informer.go:320] Caches are synced for crt configmap
	I0501 02:20:05.569741       1 shared_informer.go:320] Caches are synced for service account
	I0501 02:20:05.578368       1 shared_informer.go:320] Caches are synced for HPA
	I0501 02:20:05.580839       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:20:05.583366       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:20:05.584712       1 shared_informer.go:320] Caches are synced for GC
	I0501 02:20:05.590141       1 shared_informer.go:320] Caches are synced for persistent volume
	I0501 02:20:05.596584       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:20:05.600223       1 shared_informer.go:320] Caches are synced for endpoint
	I0501 02:20:05.602715       1 shared_informer.go:320] Caches are synced for job
	I0501 02:20:05.605865       1 shared_informer.go:320] Caches are synced for deployment
	I0501 02:20:05.608288       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:20:05.634366       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:20:05.663770       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:20:05.752163       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:05.763685       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:20:06.213812       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228479       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:20:06.228527       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ebe11aa9f8804bc05a4bda3b409f51a1cc88f94b5df37d5cfe03725957f8ae54] <==
	I0501 02:19:13.936373       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0501 02:19:13.936390       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0501 02:19:13.940386       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0501 02:19:13.942716       1 shared_informer.go:320] Caches are synced for PVC protection
	I0501 02:19:13.946741       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0501 02:19:13.949349       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.775495ms"
	I0501 02:19:13.950927       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.553µs"
	I0501 02:19:13.969177       1 shared_informer.go:320] Caches are synced for daemon sets
	I0501 02:19:13.975817       1 shared_informer.go:320] Caches are synced for attach detach
	I0501 02:19:13.985573       1 shared_informer.go:320] Caches are synced for TTL
	I0501 02:19:13.986878       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0501 02:19:13.991538       1 shared_informer.go:320] Caches are synced for node
	I0501 02:19:13.991869       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0501 02:19:13.992064       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0501 02:19:13.992201       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0501 02:19:13.992333       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0501 02:19:14.022008       1 shared_informer.go:320] Caches are synced for cronjob
	I0501 02:19:14.035151       1 shared_informer.go:320] Caches are synced for stateful set
	I0501 02:19:14.043403       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.068572       1 shared_informer.go:320] Caches are synced for resource quota
	I0501 02:19:14.086442       1 shared_informer.go:320] Caches are synced for disruption
	I0501 02:19:14.135817       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0501 02:19:14.567440       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602838       1 shared_informer.go:320] Caches are synced for garbage collector
	I0501 02:19:14.602885       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [6df6abb34b88dfeaae1f93d6a23cfc1748633884bc829df09c3047477d7f424c] <==
	I0501 02:19:44.730099       1 server_linux.go:69] "Using iptables proxy"
	E0501 02:19:44.732063       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:45.813700       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	E0501 02:19:47.982154       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406\": dial tcp 192.168.39.209:8441: connect: connection refused"
	I0501 02:19:53.031359       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.209"]
	I0501 02:19:53.089991       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0501 02:19:53.090036       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0501 02:19:53.090052       1 server_linux.go:165] "Using iptables Proxier"
	I0501 02:19:53.094508       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0501 02:19:53.095319       1 server.go:872] "Version info" version="v1.30.0"
	I0501 02:19:53.095716       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0501 02:19:53.097123       1 config.go:192] "Starting service config controller"
	I0501 02:19:53.097468       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0501 02:19:53.097670       1 config.go:101] "Starting endpoint slice config controller"
	I0501 02:19:53.097907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0501 02:19:53.098658       1 config.go:319] "Starting node config controller"
	I0501 02:19:53.101299       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0501 02:19:53.198633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:53.198675       1 shared_informer.go:320] Caches are synced for service config
	I0501 02:19:53.201407       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f0dc76865d087e0161a301a593a26850b31ed05ae9435c334ce1788dabc1c87f] <==
	I0501 02:18:49.135475       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0501 02:18:49.135542       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.135935       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.209:8441: connect: connection refused"
	W0501 02:18:49.960987       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:49.961201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.247414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.247829       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:50.353906       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:50.354334       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.351893       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.352039       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.513544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.513603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:52.774168       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:52.774360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:55.789131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:55.789541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.962943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.962985       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-167406&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.352087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.352161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	I0501 02:19:06.033778       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0501 02:19:07.236470       1 shared_informer.go:320] Caches are synced for node config
	I0501 02:19:08.934441       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [939e53f1e1db0d9b8fd7696c772cb4e3ca67264b257f339bea87fd1770b13ad3] <==
	E0501 02:18:57.123850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.195323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.195395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.309765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.309834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.470763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.470798       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.772512       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.772548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.804749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.804779       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.886920       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.886982       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.929219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.929386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:57.978490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:57.978527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:18:58.311728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:18:58.311770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:00.939844       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0501 02:19:00.939973       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0501 02:19:01.688744       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0501 02:19:09.088531       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0501 02:19:12.088779       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E0501 02:19:44.107636       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a513f3286b775a1c5c742fd0ac19b8fa8a6ee5129122ad75de1496bed6278d1f] <==
	W0501 02:19:49.143896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.143978       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.209:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.351289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.351443       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.209:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.596848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.596882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.209:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.654875       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.654916       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.209:8441/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.674532       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.674621       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.791451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.791485       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.209:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:49.859678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:49.859751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.209:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.074783       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.074851       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.209:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.174913       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.174963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.209:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.183651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.183678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:50.386329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	E0501 02:19:50.386369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.209:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.209:8441: connect: connection refused
	W0501 02:19:52.969018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0501 02:19:52.970815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0501 02:19:54.216441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.227612    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.228840    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.229739    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.230538    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.231593    4280 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: I0501 02:20:13.231641    4280 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.232115    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="200ms"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.433365    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="400ms"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.498599    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.499212    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.499916    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.500701    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.501362    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.501378    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 02:20:13 functional-167406 kubelet[4280]: E0501 02:20:13.834926    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="800ms"
	May 01 02:20:14 functional-167406 kubelet[4280]: E0501 02:20:14.637220    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="1.6s"
	May 01 02:20:16 functional-167406 kubelet[4280]: E0501 02:20:16.240104    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="3.2s"
	May 01 02:20:19 functional-167406 kubelet[4280]: E0501 02:20:19.442021    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="6.4s"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.757863    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.758851    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.759904    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.760825    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.761804    4280 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-167406\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused"
	May 01 02:20:23 functional-167406 kubelet[4280]: E0501 02:20:23.761910    4280 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 01 02:20:25 functional-167406 kubelet[4280]: E0501 02:20:25.843312    4280 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-167406?timeout=10s\": dial tcp 192.168.39.209:8441: connect: connection refused" interval="7s"
	
	
	==> storage-provisioner [ae6f4e38ab4f3bee5d7e47c976761288d60a681e7e951889c3578e892750495b] <==
	I0501 02:20:08.757073       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0501 02:20:08.772588       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0501 02:20:08.772654       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0501 02:20:12.228155       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:16.487066       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:20.083198       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:23.134350       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0501 02:20:26.154932       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b8e78e9b1aa3ac1913e84433ca87bbba74b6d0ba8c864704990a43cf8eb77965] <==
	I0501 02:19:54.061102       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0501 02:19:54.064135       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406 -n functional-167406: exit status 2 (12.869429753s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-167406" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (27.56s)
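All of the parallel-group failures that follow share one root cause: the functional-167406 apiserver on 192.168.39.209:8441 is stopped, so kubectl calls fail with "connection refused" and the minikube service commands bail out with exit status 103. A minimal triage sketch for a report like this, assuming the same checkout layout and profile name; the status invocation mirrors the helper above, while the /readyz probe and the crictl filter are generic checks, not commands this test run executed:

	# Component view of the profile (same command the test helper uses)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-167406
	# Ask the apiserver directly whether it is serving
	kubectl --context functional-167406 get --raw /readyz
	# Look for the kube-apiserver container inside the guest
	out/minikube-linux-amd64 -p functional-167406 ssh 'sudo crictl ps -a --name kube-apiserver'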

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-167406 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1435: (dbg) Non-zero exit: kubectl --context functional-167406 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (46.984511ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.39.209:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.39.209:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-167406 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 service list: exit status 103 (200.195559ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-167406 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-167406"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-linux-amd64 -p functional-167406 service list" : exit status 103
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-167406 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-167406\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 service list -o json: exit status 103 (190.386783ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-167406 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-167406"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-167406 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 service --namespace=default --https --url hello-node: exit status 103 (184.81604ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-167406 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-167406"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-linux-amd64 -p functional-167406 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 service hello-node --url --format={{.IP}}: exit status 103 (185.062596ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-167406 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-167406"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-167406 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1544: "* The control-plane node functional-167406 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-167406\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.19s)
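The Format subtest passes a Go template to the service command; on a healthy cluster the template would render just the node IP instead of the advisory text, which is what the validation at functional_test.go:1544 expects. A hedged sketch of the expected shape of the output, assuming hello-node had been deployed and the apiserver were running (the IP is this profile's node address as seen elsewhere in the log, not output from an actual successful run):

	out/minikube-linux-amd64 -p functional-167406 service hello-node --url --format={{.IP}}
	# expected on a healthy cluster: 192.168.39.209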

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 service hello-node --url: exit status 103 (181.08657ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-167406 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-167406"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-linux-amd64 -p functional-167406 service hello-node --url": exit status 103
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-167406 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-167406"
functional_test.go:1565: failed to parse "* The control-plane node functional-167406 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-167406\"": parse "* The control-plane node functional-167406 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-167406\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.18s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdany-port205709386/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714530043905066418" to /tmp/TestFunctionalparallelMountCmdany-port205709386/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714530043905066418" to /tmp/TestFunctionalparallelMountCmdany-port205709386/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714530043905066418" to /tmp/TestFunctionalparallelMountCmdany-port205709386/001/test-1714530043905066418
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (216.361687ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May  1 02:20 created-by-test
-rw-r--r-- 1 docker docker 24 May  1 02:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May  1 02:20 test-1714530043905066418
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh cat /mount-9p/test-1714530043905066418
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-167406 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-167406 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (46.361733ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.39.209:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.39.209:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-167406 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (250.068642ms)

                                                
                                                
-- stdout --
	192.168.39.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=1000,access=any,msize=65536,trans=tcp,noextend,port=35605)
	total 2
	-rw-r--r-- 1 docker docker 24 May  1 02:20 created-by-test
	-rw-r--r-- 1 docker docker 24 May  1 02:20 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 May  1 02:20 test-1714530043905066418
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-167406 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdany-port205709386/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdany-port205709386/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port205709386/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:35605
* Userspace file server: ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port205709386/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdany-port205709386/001:/mount-9p --alsologtostderr -v=1] stderr:
I0501 02:20:43.962943   29316 out.go:291] Setting OutFile to fd 1 ...
I0501 02:20:43.963114   29316 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:43.963125   29316 out.go:304] Setting ErrFile to fd 2...
I0501 02:20:43.963133   29316 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:43.963444   29316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
I0501 02:20:43.963764   29316 mustload.go:65] Loading cluster: functional-167406
I0501 02:20:43.964250   29316 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:43.964797   29316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:43.964851   29316 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:43.980786   29316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32919
I0501 02:20:43.981253   29316 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:43.981985   29316 main.go:141] libmachine: Using API Version  1
I0501 02:20:43.982008   29316 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:43.982347   29316 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:43.982518   29316 main.go:141] libmachine: (functional-167406) Calling .GetState
I0501 02:20:43.984327   29316 host.go:66] Checking if "functional-167406" exists ...
I0501 02:20:43.984702   29316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:43.984732   29316 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:43.999770   29316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
I0501 02:20:44.000246   29316 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:44.000903   29316 main.go:141] libmachine: Using API Version  1
I0501 02:20:44.000937   29316 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:44.001242   29316 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:44.001430   29316 main.go:141] libmachine: (functional-167406) Calling .DriverName
I0501 02:20:44.001567   29316 main.go:141] libmachine: (functional-167406) Calling .DriverName
I0501 02:20:44.001683   29316 main.go:141] libmachine: (functional-167406) Calling .GetIP
I0501 02:20:44.004522   29316 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:44.005110   29316 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
I0501 02:20:44.005135   29316 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:44.005641   29316 main.go:141] libmachine: (functional-167406) Calling .DriverName
I0501 02:20:44.008616   29316 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port205709386/001 into VM as /mount-9p ...
I0501 02:20:44.010228   29316 out.go:177]   - Mount type:   9p
I0501 02:20:44.011786   29316 out.go:177]   - User ID:      docker
I0501 02:20:44.013076   29316 out.go:177]   - Group ID:     docker
I0501 02:20:44.014291   29316 out.go:177]   - Version:      9p2000.L
I0501 02:20:44.015808   29316 out.go:177]   - Message Size: 262144
I0501 02:20:44.017078   29316 out.go:177]   - Options:      map[]
I0501 02:20:44.018857   29316 out.go:177]   - Bind Address: 192.168.39.1:35605
I0501 02:20:44.020125   29316 out.go:177] * Userspace file server: 
I0501 02:20:44.020276   29316 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0501 02:20:44.021379   29316 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
I0501 02:20:44.024270   29316 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:44.024644   29316 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
I0501 02:20:44.024686   29316 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:44.024827   29316 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
I0501 02:20:44.025002   29316 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
I0501 02:20:44.025172   29316 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
I0501 02:20:44.025347   29316 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
I0501 02:20:44.117831   29316 mount.go:180] unmount for /mount-9p ran successfully
I0501 02:20:44.117853   29316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0501 02:20:44.130539   29316 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=35605,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I0501 02:20:44.164641   29316 main.go:125] stdlog: ufs.go:141 connected
I0501 02:20:44.165596   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tversion tag 65535 msize 65536 version '9P2000.L'
I0501 02:20:44.165658   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rversion tag 65535 msize 65536 version '9P2000'
I0501 02:20:44.166712   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0501 02:20:44.166803   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rattach tag 0 aqid (20fa090 31f447fb 'd')
I0501 02:20:44.167415   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 0
I0501 02:20:44.167528   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa090 31f447fb 'd') m d775 at 0 mt 1714530043 l 4096 t 0 d 0 ext )
I0501 02:20:44.172278   29316 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/.mount-process: {Name:mk7ac05baf5ddcf9e22c2e24c3b512ab186eb700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 02:20:44.172488   29316 mount.go:105] mount successful: ""
I0501 02:20:44.174679   29316 out.go:177] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port205709386/001 to /mount-9p
I0501 02:20:44.176125   29316 out.go:177] 
I0501 02:20:44.177395   29316 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0501 02:20:45.001898   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 0
I0501 02:20:45.002040   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa090 31f447fb 'd') m d775 at 0 mt 1714530043 l 4096 t 0 d 0 ext )
I0501 02:20:45.004239   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 1 
I0501 02:20:45.004286   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 
I0501 02:20:45.004523   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Topen tag 0 fid 1 mode 0
I0501 02:20:45.004582   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Ropen tag 0 qid (20fa090 31f447fb 'd') iounit 0
I0501 02:20:45.004784   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 0
I0501 02:20:45.004867   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa090 31f447fb 'd') m d775 at 0 mt 1714530043 l 4096 t 0 d 0 ext )
I0501 02:20:45.005102   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 1 offset 0 count 65512
I0501 02:20:45.005262   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 258
I0501 02:20:45.005479   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 1 offset 258 count 65254
I0501 02:20:45.005509   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 0
I0501 02:20:45.005755   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 1 offset 258 count 65512
I0501 02:20:45.005784   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 0
I0501 02:20:45.006003   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 2 0:'test-1714530043905066418' 
I0501 02:20:45.006037   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 (20fa093 31f447fb '') 
I0501 02:20:45.006236   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.006314   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('test-1714530043905066418' 'jenkins' 'balintp' '' q (20fa093 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.006541   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.006633   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('test-1714530043905066418' 'jenkins' 'balintp' '' q (20fa093 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.006859   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 2
I0501 02:20:45.006906   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.007271   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0501 02:20:45.007337   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 (20fa092 31f447fb '') 
I0501 02:20:45.007831   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.007949   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa092 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.008192   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.008283   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa092 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.009213   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 2
I0501 02:20:45.009250   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.009642   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0501 02:20:45.009685   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 (20fa091 31f447fb '') 
I0501 02:20:45.009909   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.009999   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa091 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.010298   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.010387   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa091 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.010728   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 2
I0501 02:20:45.010757   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.010992   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 1 offset 258 count 65512
I0501 02:20:45.011024   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 0
I0501 02:20:45.011305   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 1
I0501 02:20:45.011340   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.214165   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 1 0:'test-1714530043905066418' 
I0501 02:20:45.214256   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 (20fa093 31f447fb '') 
I0501 02:20:45.214490   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 1
I0501 02:20:45.214607   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('test-1714530043905066418' 'jenkins' 'balintp' '' q (20fa093 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.214806   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 1 newfid 2 
I0501 02:20:45.214838   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 
I0501 02:20:45.215059   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Topen tag 0 fid 2 mode 0
I0501 02:20:45.215161   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Ropen tag 0 qid (20fa093 31f447fb '') iounit 0
I0501 02:20:45.215360   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 1
I0501 02:20:45.215469   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('test-1714530043905066418' 'jenkins' 'balintp' '' q (20fa093 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.215752   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 2 offset 0 count 65512
I0501 02:20:45.215809   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 24
I0501 02:20:45.216026   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 2 offset 24 count 65512
I0501 02:20:45.216081   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 0
I0501 02:20:45.216354   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 2 offset 24 count 65512
I0501 02:20:45.216410   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 0
I0501 02:20:45.216873   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 2
I0501 02:20:45.216915   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.217353   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 1
I0501 02:20:45.217394   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.508275   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 0
I0501 02:20:45.508467   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa090 31f447fb 'd') m d775 at 0 mt 1714530043 l 4096 t 0 d 0 ext )
I0501 02:20:45.509899   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 1 
I0501 02:20:45.509939   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 
I0501 02:20:45.510134   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Topen tag 0 fid 1 mode 0
I0501 02:20:45.510256   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Ropen tag 0 qid (20fa090 31f447fb 'd') iounit 0
I0501 02:20:45.510437   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 0
I0501 02:20:45.510526   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa090 31f447fb 'd') m d775 at 0 mt 1714530043 l 4096 t 0 d 0 ext )
I0501 02:20:45.511054   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 1 offset 0 count 65512
I0501 02:20:45.511269   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 258
I0501 02:20:45.511451   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 1 offset 258 count 65254
I0501 02:20:45.511494   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 0
I0501 02:20:45.511746   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 1 offset 258 count 65512
I0501 02:20:45.511781   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 0
I0501 02:20:45.512041   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 2 0:'test-1714530043905066418' 
I0501 02:20:45.512082   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 (20fa093 31f447fb '') 
I0501 02:20:45.512264   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.512356   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('test-1714530043905066418' 'jenkins' 'balintp' '' q (20fa093 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.512591   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.512694   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('test-1714530043905066418' 'jenkins' 'balintp' '' q (20fa093 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.513145   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 2
I0501 02:20:45.513177   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.513421   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0501 02:20:45.513468   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 (20fa092 31f447fb '') 
I0501 02:20:45.513766   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.513838   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa092 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.514025   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.514122   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa092 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.514850   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 2
I0501 02:20:45.514876   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.515063   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0501 02:20:45.515112   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rwalk tag 0 (20fa091 31f447fb '') 
I0501 02:20:45.515261   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.515336   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa091 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.515484   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tstat tag 0 fid 2
I0501 02:20:45.515559   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa091 31f447fb '') m 644 at 0 mt 1714530043 l 24 t 0 d 0 ext )
I0501 02:20:45.515729   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 2
I0501 02:20:45.515754   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.515899   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tread tag 0 fid 1 offset 258 count 65512
I0501 02:20:45.515934   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rread tag 0 count 0
I0501 02:20:45.516065   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 1
I0501 02:20:45.516095   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.518341   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0501 02:20:45.518406   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rerror tag 0 ename 'file not found' ecode 0
I0501 02:20:45.728315   29316 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.39.209:42862 Tclunk tag 0 fid 0
I0501 02:20:45.728396   29316 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.39.209:42862 Rclunk tag 0
I0501 02:20:45.728991   29316 main.go:125] stdlog: ufs.go:147 disconnected
I0501 02:20:45.946672   29316 out.go:177] * Unmounting /mount-9p ...
I0501 02:20:45.948359   29316 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0501 02:20:45.959467   29316 mount.go:180] unmount for /mount-9p ran successfully
I0501 02:20:45.959579   29316 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/.mount-process: {Name:mk7ac05baf5ddcf9e22c2e24c3b512ab186eb700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0501 02:20:45.961353   29316 out.go:177] 
W0501 02:20:45.963937   29316 out.go:239] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0501 02:20:45.965404   29316 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.14s)
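The mount itself succeeded here: the retried findmnt passed and the ls output above lists all three files written by the test, so only the kubectl replace step against the stopped apiserver failed. For manual debugging, the guest-side mount the helper performs is captured verbatim in the stderr above and can be replayed over minikube ssh, assuming the host-side "minikube mount" (ufs) process is still serving; the bind address 192.168.39.1 and port 35605 are specific to this run:

	out/minikube-linux-amd64 -p functional-167406 ssh 'sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=35605,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p'
	# then verify from the host side:
	out/minikube-linux-amd64 -p functional-167406 ssh 'findmnt -T /mount-9p | grep 9p'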

                                                
                                    

Test pass (276/325)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 75.27
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.0/json-events 22.83
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.13
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
22 TestOffline 100.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 214.1
29 TestAddons/parallel/Registry 17.11
30 TestAddons/parallel/Ingress 27.13
31 TestAddons/parallel/InspektorGadget 12.27
32 TestAddons/parallel/MetricsServer 5.98
33 TestAddons/parallel/HelmTiller 15.26
35 TestAddons/parallel/CSI 66.2
36 TestAddons/parallel/Headlamp 42.3
37 TestAddons/parallel/CloudSpanner 6.69
38 TestAddons/parallel/LocalPath 14.17
39 TestAddons/parallel/NvidiaDevicePlugin 6.66
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.12
44 TestAddons/StoppedEnableDisable 92.72
45 TestCertOptions 62.51
46 TestCertExpiration 274.63
48 TestForceSystemdFlag 106.26
49 TestForceSystemdEnv 48.35
51 TestKVMDriverInstallOrUpdate 13.26
55 TestErrorSpam/setup 45.87
56 TestErrorSpam/start 0.36
57 TestErrorSpam/status 0.75
58 TestErrorSpam/pause 1.66
59 TestErrorSpam/unpause 1.7
60 TestErrorSpam/stop 4.95
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 98.35
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 44.16
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.78
72 TestFunctional/serial/CacheCmd/cache/add_local 3.07
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.12
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 40.12
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.62
83 TestFunctional/serial/LogsFileCmd 1.57
86 TestFunctional/parallel/ConfigCmd 0.37
88 TestFunctional/parallel/DryRun 0.29
89 TestFunctional/parallel/InternationalLanguage 0.14
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 122.06
98 TestFunctional/parallel/SSHCmd 0.43
99 TestFunctional/parallel/CpCmd 1.25
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.27
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
110 TestFunctional/parallel/License 0.82
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
115 TestFunctional/parallel/ImageCommands/ImageBuild 4.58
116 TestFunctional/parallel/ImageCommands/Setup 3.28
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.09
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.8
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.18
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.97
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.44
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.02
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.28
143 TestFunctional/parallel/ProfileCmd/profile_list 0.27
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
146 TestFunctional/parallel/MountCmd/specific-port 1.59
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
148 TestFunctional/parallel/Version/short 0.06
149 TestFunctional/parallel/Version/components 0.55
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 280.88
157 TestMultiControlPlane/serial/DeployApp 7.16
158 TestMultiControlPlane/serial/PingHostFromPods 1.31
159 TestMultiControlPlane/serial/AddWorkerNode 48.56
160 TestMultiControlPlane/serial/NodeLabels 0.06
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
162 TestMultiControlPlane/serial/CopyFile 13.58
163 TestMultiControlPlane/serial/StopSecondaryNode 92.45
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.41
165 TestMultiControlPlane/serial/RestartSecondaryNode 45.37
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.54
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 475.05
168 TestMultiControlPlane/serial/DeleteSecondaryNode 7.15
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
170 TestMultiControlPlane/serial/StopCluster 275.75
171 TestMultiControlPlane/serial/RestartCluster 158.83
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
173 TestMultiControlPlane/serial/AddSecondaryNode 71.29
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
178 TestJSONOutput/start/Command 61.54
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.74
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.64
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.35
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.06
207 TestMinikubeProfile 95.91
210 TestMountStart/serial/StartWithMountFirst 30.89
211 TestMountStart/serial/VerifyMountFirst 0.55
212 TestMountStart/serial/StartWithMountSecond 30.43
213 TestMountStart/serial/VerifyMountSecond 0.38
214 TestMountStart/serial/DeleteFirst 0.95
215 TestMountStart/serial/VerifyMountPostDelete 0.39
216 TestMountStart/serial/Stop 1.37
217 TestMountStart/serial/RestartStopped 26.37
218 TestMountStart/serial/VerifyMountPostStop 0.39
221 TestMultiNode/serial/FreshStart2Nodes 106.36
222 TestMultiNode/serial/DeployApp2Nodes 5.87
223 TestMultiNode/serial/PingHostFrom2Pods 0.85
224 TestMultiNode/serial/AddNode 46.75
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.23
227 TestMultiNode/serial/CopyFile 7.46
228 TestMultiNode/serial/StopNode 2.35
229 TestMultiNode/serial/StartAfterStop 26.4
230 TestMultiNode/serial/RestartKeepsNodes 303.2
231 TestMultiNode/serial/DeleteNode 2.15
232 TestMultiNode/serial/StopMultiNode 184.08
233 TestMultiNode/serial/RestartMultiNode 83
234 TestMultiNode/serial/ValidateNameConflict 48.67
239 TestPreload 270.35
241 TestScheduledStopUnix 116.61
245 TestRunningBinaryUpgrade 211.04
247 TestKubernetesUpgrade 222.83
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestPause/serial/Start 83.91
260 TestNoKubernetes/serial/StartWithK8s 97.91
261 TestPause/serial/SecondStartNoReconfiguration 67.06
262 TestNoKubernetes/serial/StartWithStopK8s 22.28
263 TestNoKubernetes/serial/Start 37.12
264 TestPause/serial/Pause 0.77
265 TestPause/serial/VerifyStatus 0.26
266 TestPause/serial/Unpause 0.79
267 TestPause/serial/PauseAgain 0.99
268 TestPause/serial/DeletePaused 0.84
269 TestPause/serial/VerifyDeletedResources 4.81
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
271 TestNoKubernetes/serial/ProfileList 2.85
272 TestNoKubernetes/serial/Stop 1.78
273 TestNoKubernetes/serial/StartNoArgs 47.51
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
282 TestNetworkPlugins/group/false 3.47
283 TestStoppedBinaryUpgrade/Setup 3.16
284 TestStoppedBinaryUpgrade/Upgrade 172.24
289 TestStartStop/group/old-k8s-version/serial/FirstStart 205.06
291 TestStartStop/group/no-preload/serial/FirstStart 117.59
292 TestStoppedBinaryUpgrade/MinikubeLogs 1.6
294 TestStartStop/group/embed-certs/serial/FirstStart 112.04
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.77
297 TestStartStop/group/embed-certs/serial/DeployApp 10.31
298 TestStartStop/group/no-preload/serial/DeployApp 11.32
299 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
300 TestStartStop/group/embed-certs/serial/Stop 92.5
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
302 TestStartStop/group/no-preload/serial/Stop 92.49
303 TestStartStop/group/old-k8s-version/serial/DeployApp 10.46
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
305 TestStartStop/group/old-k8s-version/serial/Stop 92.47
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.48
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
310 TestStartStop/group/embed-certs/serial/SecondStart 295.93
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/no-preload/serial/SecondStart 332.13
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
314 TestStartStop/group/old-k8s-version/serial/SecondStart 485.63
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.37
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 333.64
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
319 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
320 TestStartStop/group/embed-certs/serial/Pause 2.95
322 TestStartStop/group/newest-cni/serial/FirstStart 61.6
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 17.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
326 TestStartStop/group/no-preload/serial/Pause 3.12
327 TestNetworkPlugins/group/auto/Start 101.19
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
330 TestStartStop/group/newest-cni/serial/Stop 7.37
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
332 TestStartStop/group/newest-cni/serial/SecondStart 41.94
333 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 18.01
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.16
337 TestNetworkPlugins/group/kindnet/Start 69.63
338 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
341 TestStartStop/group/newest-cni/serial/Pause 2.91
342 TestNetworkPlugins/group/calico/Start 119.79
343 TestNetworkPlugins/group/auto/KubeletFlags 0.22
344 TestNetworkPlugins/group/auto/NetCatPod 10.26
345 TestNetworkPlugins/group/auto/DNS 0.19
346 TestNetworkPlugins/group/auto/Localhost 0.21
347 TestNetworkPlugins/group/auto/HairPin 0.16
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
350 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
351 TestNetworkPlugins/group/custom-flannel/Start 85.58
352 TestNetworkPlugins/group/kindnet/DNS 0.17
353 TestNetworkPlugins/group/kindnet/Localhost 0.15
354 TestNetworkPlugins/group/kindnet/HairPin 0.15
355 TestNetworkPlugins/group/enable-default-cni/Start 104.91
356 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.3
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
358 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
359 TestStartStop/group/old-k8s-version/serial/Pause 3.31
360 TestNetworkPlugins/group/flannel/Start 102.69
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/KubeletFlags 0.29
363 TestNetworkPlugins/group/calico/NetCatPod 11.3
364 TestNetworkPlugins/group/calico/DNS 0.17
365 TestNetworkPlugins/group/calico/Localhost 0.13
366 TestNetworkPlugins/group/calico/HairPin 0.13
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.18
369 TestNetworkPlugins/group/bridge/Start 103.12
370 TestNetworkPlugins/group/custom-flannel/DNS 0.19
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
375 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
376 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
377 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
380 TestNetworkPlugins/group/flannel/NetCatPod 9.34
381 TestNetworkPlugins/group/flannel/DNS 0.15
382 TestNetworkPlugins/group/flannel/Localhost 0.12
383 TestNetworkPlugins/group/flannel/HairPin 0.13
384 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
385 TestNetworkPlugins/group/bridge/NetCatPod 9.24
386 TestNetworkPlugins/group/bridge/DNS 0.15
387 TestNetworkPlugins/group/bridge/Localhost 0.14
388 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (75.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-427514 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-427514 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (1m15.2697397s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (75.27s)
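The passing run above is essentially a cache-priming step: it downloads the ISO, the preload tarball and kubectl for v1.20.0 without ever booting a VM. A hand-run equivalent, assuming the job-built out/minikube-linux-amd64 binary and the kvm2 driver are available; the profile name is the one from this run, and the cache path shown assumes the default MINIKUBE_HOME location:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-427514 \
	  --force --alsologtostderr --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=containerd
	# downloaded artifacts land in the local cache rather than in a running cluster
	ls ~/.minikube/cache/preloaded-tarball/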

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-427514
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-427514: exit status 85 (77.874238ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-427514 | jenkins | v1.33.0 | 01 May 24 02:07 UTC |          |
	|         | -p download-only-427514        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:07:25
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:07:25.077598   20797 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:07:25.077717   20797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:25.077728   20797 out.go:304] Setting ErrFile to fd 2...
	I0501 02:07:25.077734   20797 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:07:25.077933   20797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	W0501 02:07:25.078068   20797 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18779-13407/.minikube/config/config.json: open /home/jenkins/minikube-integration/18779-13407/.minikube/config/config.json: no such file or directory
	I0501 02:07:25.078631   20797 out.go:298] Setting JSON to true
	I0501 02:07:25.079504   20797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2987,"bootTime":1714526258,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:07:25.079558   20797 start.go:139] virtualization: kvm guest
	I0501 02:07:25.082204   20797 out.go:97] [download-only-427514] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:07:25.083988   20797 out.go:169] MINIKUBE_LOCATION=18779
	W0501 02:07:25.082361   20797 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball: no such file or directory
	I0501 02:07:25.082407   20797 notify.go:220] Checking for updates...
	I0501 02:07:25.085527   20797 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:07:25.087014   20797 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:07:25.088488   20797 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 02:07:25.089952   20797 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0501 02:07:25.092665   20797 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0501 02:07:25.092909   20797 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:07:25.193332   20797 out.go:97] Using the kvm2 driver based on user configuration
	I0501 02:07:25.193360   20797 start.go:297] selected driver: kvm2
	I0501 02:07:25.193367   20797 start.go:901] validating driver "kvm2" against <nil>
	I0501 02:07:25.193707   20797 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:25.193821   20797 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13407/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:07:25.208240   20797 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:07:25.208304   20797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:07:25.208764   20797 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0501 02:07:25.208937   20797 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0501 02:07:25.209012   20797 cni.go:84] Creating CNI manager for ""
	I0501 02:07:25.209029   20797 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:07:25.209039   20797 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 02:07:25.209106   20797 start.go:340] cluster config:
	{Name:download-only-427514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-427514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:07:25.209380   20797 iso.go:125] acquiring lock: {Name:mk2f0fca3713b9e2ec58748a6d2af30df1faa5ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:07:25.211409   20797 out.go:97] Downloading VM boot image ...
	I0501 02:07:25.211459   20797 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18779-13407/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso
	I0501 02:07:43.241652   20797 out.go:97] Starting "download-only-427514" primary control-plane node in "download-only-427514" cluster
	I0501 02:07:43.241681   20797 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0501 02:07:43.400442   20797 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0501 02:07:43.400484   20797 cache.go:56] Caching tarball of preloaded images
	I0501 02:07:43.400754   20797 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0501 02:07:43.402799   20797 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0501 02:07:43.402819   20797 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0501 02:07:43.556420   20797 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0501 02:08:04.988349   20797 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0501 02:08:04.988439   20797 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0501 02:08:05.882925   20797 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0501 02:08:05.883290   20797 profile.go:143] Saving config to /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/download-only-427514/config.json ...
	I0501 02:08:05.883318   20797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/download-only-427514/config.json: {Name:mk7e649e0725c8a95353933c57d6b49660d5fdd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0501 02:08:05.883459   20797 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0501 02:08:05.883638   20797 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18779-13407/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-427514 host does not exist
	  To start a cluster, run: "minikube start -p download-only-427514"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
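The non-zero exit above is expected rather than a failure: a --download-only profile never creates a host, so "minikube logs" has nothing to read and exits 85 with the hint printed in stdout. A quick way to see the same behaviour, assuming the profile from this run still exists:

	out/minikube-linux-amd64 logs -p download-only-427514
	echo $?   # 85 - the control-plane node host does not exist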

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-427514
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/json-events (22.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-551237 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-551237 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (22.833598779s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (22.83s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-551237
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-551237: exit status 85 (71.556832ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-427514 | jenkins | v1.33.0 | 01 May 24 02:07 UTC |                     |
	|         | -p download-only-427514        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 01 May 24 02:08 UTC | 01 May 24 02:08 UTC |
	| delete  | -p download-only-427514        | download-only-427514 | jenkins | v1.33.0 | 01 May 24 02:08 UTC | 01 May 24 02:08 UTC |
	| start   | -o=json --download-only        | download-only-551237 | jenkins | v1.33.0 | 01 May 24 02:08 UTC |                     |
	|         | -p download-only-551237        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/01 02:08:40
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0501 02:08:40.688220   21207 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:08:40.688310   21207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:08:40.688319   21207 out.go:304] Setting ErrFile to fd 2...
	I0501 02:08:40.688323   21207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:08:40.688517   21207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:08:40.689048   21207 out.go:298] Setting JSON to true
	I0501 02:08:40.689895   21207 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3063,"bootTime":1714526258,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:08:40.689951   21207 start.go:139] virtualization: kvm guest
	I0501 02:08:40.692258   21207 out.go:97] [download-only-551237] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:08:40.693953   21207 out.go:169] MINIKUBE_LOCATION=18779
	I0501 02:08:40.692449   21207 notify.go:220] Checking for updates...
	I0501 02:08:40.696620   21207 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:08:40.697969   21207 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:08:40.699296   21207 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 02:08:40.700700   21207 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0501 02:08:40.703142   21207 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0501 02:08:40.703383   21207 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:08:40.734784   21207 out.go:97] Using the kvm2 driver based on user configuration
	I0501 02:08:40.734826   21207 start.go:297] selected driver: kvm2
	I0501 02:08:40.734831   21207 start.go:901] validating driver "kvm2" against <nil>
	I0501 02:08:40.735171   21207 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:08:40.735254   21207 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18779-13407/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0501 02:08:40.749468   21207 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0501 02:08:40.749520   21207 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0501 02:08:40.749989   21207 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0501 02:08:40.750136   21207 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0501 02:08:40.750196   21207 cni.go:84] Creating CNI manager for ""
	I0501 02:08:40.750214   21207 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0501 02:08:40.750226   21207 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0501 02:08:40.750294   21207 start.go:340] cluster config:
	{Name:download-only-551237 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-551237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:08:40.750390   21207 iso.go:125] acquiring lock: {Name:mk2f0fca3713b9e2ec58748a6d2af30df1faa5ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0501 02:08:40.752390   21207 out.go:97] Starting "download-only-551237" primary control-plane node in "download-only-551237" cluster
	I0501 02:08:40.752419   21207 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0501 02:08:40.909262   21207 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	I0501 02:08:40.909305   21207 cache.go:56] Caching tarball of preloaded images
	I0501 02:08:40.909453   21207 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime containerd
	I0501 02:08:40.911537   21207 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0501 02:08:40.911571   21207 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4 ...
	I0501 02:08:41.065872   21207 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:3a7aac5052a5448f24921f55001543e6 -> /home/jenkins/minikube-integration/18779-13407/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-551237 host does not exist
	  To start a cluster, run: "minikube start -p download-only-551237"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-551237
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-874803 --alsologtostderr --binary-mirror http://127.0.0.1:33203 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-874803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-874803
--- PASS: TestBinaryMirror (0.58s)
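TestBinaryMirror points minikube at a locally served address via --binary-mirror so that the Kubernetes binaries are fetched from it instead of the public release site. A rough sketch of doing the same by hand, assuming python3 is available to stand in for the mirror; the port, directory and profile name below are placeholders, and the served directory is assumed to mimic the upstream release layout:

	python3 -m http.server 33203 --directory ./mirror &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:33203 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 delete -p binary-mirror-demo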

                                                
                                    
x
+
TestOffline (100.57s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-744919 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-744919 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m39.576140041s)
helpers_test.go:175: Cleaning up "offline-containerd-744919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-744919
--- PASS: TestOffline (100.57s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-753721
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-753721: exit status 85 (74.962522ms)

                                                
                                                
-- stdout --
	* Profile "addons-753721" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-753721"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-753721
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-753721: exit status 85 (72.292785ms)

                                                
                                                
-- stdout --
	* Profile "addons-753721" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-753721"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (214.1s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-753721 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-753721 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m34.099741681s)
--- PASS: TestAddons/Setup (214.10s)
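All of the addons exercised in the parallel tests below are enabled in this single start. Outside the CI matrix the same flags work with any subset; a minimal sketch using the same binary and driver, with a hypothetical profile name:

	out/minikube-linux-amd64 start -p addons-demo --memory=4000 --wait=true \
	  --driver=kvm2 --container-runtime=containerd \
	  --addons=registry --addons=metrics-server --addons=ingress
	# individual addons can also be toggled after the cluster is up
	out/minikube-linux-amd64 addons enable yakd -p addons-demo
	out/minikube-linux-amd64 addons disable yakd -p addons-demo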

                                                
                                    
x
+
TestAddons/parallel/Registry (17.11s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 15.244378ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9f5s9" [acf26737-6f84-44f3-9d70-94a29f97d465] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007086195s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tlqjr" [fb50146b-ba25-474a-a222-eeeb6c9e6e8f] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009190947s
addons_test.go:340: (dbg) Run:  kubectl --context addons-753721 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-753721 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-753721 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.250879459s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 ip
2024/05/01 02:12:55 [DEBUG] GET http://192.168.39.160:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.11s)
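Stripped of the test plumbing, the registry check waits for the registry pods and then probes the in-cluster service from a throwaway busybox pod. A hand-run equivalent against the addons-753721 profile used in this run:

	kubectl --context addons-753721 wait --for=condition=ready --namespace=kube-system pod --selector=actual-registry=true --timeout=6m0s
	kubectl --context addons-753721 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	out/minikube-linux-amd64 -p addons-753721 addons disable registry --alsologtostderr -v=1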

                                                
                                    
x
+
TestAddons/parallel/Ingress (27.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-753721 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-753721 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-753721 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [116e8450-0d22-419d-9d01-aabfcef62dfc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [116e8450-0d22-419d-9d01-aabfcef62dfc] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.004721953s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-753721 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.160
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-753721 addons disable ingress-dns --alsologtostderr -v=1: (1.135829595s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-753721 addons disable ingress --alsologtostderr -v=1: (7.736356304s)
--- PASS: TestAddons/parallel/Ingress (27.13s)
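The ingress assertion routes a request through the nginx controller by curling 127.0.0.1 inside the VM with an explicit Host header. Condensed to its commands; the manifests are assumed to be the ones shipped under the minikube source tree's integration-test testdata directory:

	kubectl --context addons-753721 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-753721 replace --force -f testdata/nginx-pod-svc.yaml
	kubectl --context addons-753721 wait --for=condition=ready pod --selector=run=nginx --timeout=8m0s
	out/minikube-linux-amd64 -p addons-753721 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"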

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lb4g9" [88f8c007-07d5-4d74-8aa8-d69bd5ec5ded] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004044728s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-753721
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-753721: (6.263939618s)
--- PASS: TestAddons/parallel/InspektorGadget (12.27s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.98s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.540117ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-ghxcm" [fc56f858-0638-4027-a82a-39ea86ca30fc] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006051827s
addons_test.go:415: (dbg) Run:  kubectl --context addons-753721 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.98s)
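Once the metrics-server pod reports healthy, the only functional check is that the Metrics API answers a standard kubectl top query:

	kubectl --context addons-753721 top pods -n kube-system
	out/minikube-linux-amd64 -p addons-753721 addons disable metrics-server --alsologtostderr -v=1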

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (15.26s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 2.69105ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-vwlfh" [289182a2-d800-42db-aaee-23ef81583da0] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.007799853s
addons_test.go:473: (dbg) Run:  kubectl --context addons-753721 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-753721 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.642078738s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.26s)

                                                
                                    
x
+
TestAddons/parallel/CSI (66.2s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 6.597573ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2ef1de88-aa2f-49dc-b54c-25880501796d] Pending
helpers_test.go:344: "task-pv-pod" [2ef1de88-aa2f-49dc-b54c-25880501796d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2ef1de88-aa2f-49dc-b54c-25880501796d] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 34.00446063s
addons_test.go:584: (dbg) Run:  kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-753721 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-753721 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-753721 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-753721 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e803a821-52e0-4e31-b859-5b67a22f3ca7] Pending
helpers_test.go:344: "task-pv-pod-restore" [e803a821-52e0-4e31-b859-5b67a22f3ca7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e803a821-52e0-4e31-b859-5b67a22f3ca7] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004241186s
addons_test.go:626: (dbg) Run:  kubectl --context addons-753721 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-753721 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-753721 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-753721 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.838116217s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (66.20s)
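The CSI test is a create/snapshot/restore round trip against the csi-hostpath driver, driven entirely by the manifests under testdata/csi-hostpath-driver/ in the minikube source tree. Reduced to its kubectl steps against the same profile:

	kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-753721 delete pod task-pv-pod
	kubectl --context addons-753721 delete pvc hpvc
	kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-753721 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml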

                                                
                                    
x
+
TestAddons/parallel/Headlamp (42.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-753721 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-753721 --alsologtostderr -v=1: (1.299449537s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-n66zj" [c90a6710-5612-4019-9cdb-be2fc6124087] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-n66zj" [c90a6710-5612-4019-9cdb-be2fc6124087] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 41.003638896s
--- PASS: TestAddons/parallel/Headlamp (42.30s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-2mgnw" [f0ffd720-7e57-409b-b708-5f35280b21ff] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004176188s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-753721
--- PASS: TestAddons/parallel/CloudSpanner (6.69s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (14.17s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-753721 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-753721 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-753721 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f566db94-508e-48de-8f2b-0632f39c032a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f566db94-508e-48de-8f2b-0632f39c032a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f566db94-508e-48de-8f2b-0632f39c032a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004543277s
addons_test.go:891: (dbg) Run:  kubectl --context addons-753721 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 ssh "cat /opt/local-path-provisioner/pvc-8a6c4dba-fa9e-4067-a98b-a0439e5bc351_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-753721 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-753721 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-753721 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.17s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.66s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sn8mh" [48dcb55f-2d38-4ccb-a382-1ee5246a8778] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008015926s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-753721
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.66s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-26h4g" [fdab3c42-4361-4991-80c1-037121c7c02d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.008009675s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-753721 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-753721 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.72s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-753721
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-753721: (1m32.433452576s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-753721
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-753721
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-753721
--- PASS: TestAddons/StoppedEnableDisable (92.72s)

                                                
                                    
TestCertOptions (62.51s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-291115 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-291115 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m0.924018917s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-291115 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-291115 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-291115 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-291115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-291115
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-291115: (1.099158229s)
--- PASS: TestCertOptions (62.51s)
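TestCertOptions starts the profile with extra --apiserver-ips / --apiserver-names values and a non-default --apiserver-port, then dumps the apiserver certificate with openssl to confirm the extra SANs were baked in. A hedged Go sketch of that verification step follows (not cert_options_test.go itself); the profile name, ssh command, and certificate path are from the log, and the port check is handled separately in the real run via kubectl config view and admin.conf.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Sketch: dump the apiserver cert from inside the node and look for the SANs
// passed at start time.
func main() {
	profile := "cert-options-291115"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("certificate text does not mention %q\n", want)
		}
	}
}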

                                                
                                    
TestCertExpiration (274.63s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-851194 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-851194 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m19.97158658s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-851194 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-851194 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (13.644349641s)
helpers_test.go:175: Cleaning up "cert-expiration-851194" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-851194
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-851194: (1.016017968s)
--- PASS: TestCertExpiration (274.63s)
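The timings above (80s first start, 13.6s second start, ~275s total) show a two-phase flow: create the profile with a 3-minute certificate lifetime, let that window lapse, then restart with 8760h so minikube regenerates the certificates. A sketch of that flow, assuming the gap between starts is simply waited out (the flags match the log; the explicit 3-minute sleep is an assumption, not how cert_options_test.go necessarily spends that time):

package main

import (
	"os/exec"
	"time"
)

// Sketch: short cert lifetime, wait, then restart with a one-year lifetime.
func main() {
	p := "cert-expiration-851194"
	run := func(args ...string) error {
		return exec.Command("out/minikube-linux-amd64", args...).Run()
	}

	_ = run("start", "-p", p, "--memory=2048", "--cert-expiration=3m",
		"--driver=kvm2", "--container-runtime=containerd")
	time.Sleep(3 * time.Minute) // assumption: roughly the gap seen in the log
	_ = run("start", "-p", p, "--memory=2048", "--cert-expiration=8760h",
		"--driver=kvm2", "--container-runtime=containerd")
	_ = run("delete", "-p", p)
}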

                                                
                                    
TestForceSystemdFlag (106.26s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-820886 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0501 03:15:16.315002   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-820886 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m45.283905823s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-820886 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-820886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-820886
--- PASS: TestForceSystemdFlag (106.26s)
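This test and TestForceSystemdEnv below both start a cluster with systemd cgroups forced (via --force-systemd here, via the environment in the next test) and then cat /etc/containerd/config.toml over ssh. A sketch of that check follows; the profile name and cat command come from the log, while asserting on "SystemdCgroup = true" (containerd's usual runc setting for the systemd cgroup driver) is an assumption about what docker_test.go actually looks for.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Sketch: read the containerd config from the node and check the cgroup driver.
func main() {
	profile := "force-systemd-flag-820886"
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"cat /etc/containerd/config.toml").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), "SystemdCgroup = true") {
		fmt.Println("containerd runc runtime is on the systemd cgroup driver")
	} else {
		fmt.Println("SystemdCgroup does not appear to be enabled")
	}
}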

                                                
                                    
TestForceSystemdEnv (48.35s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-921796 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-921796 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (46.346891152s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-921796 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-921796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-921796
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-921796: (1.778471183s)
--- PASS: TestForceSystemdEnv (48.35s)

                                                
                                    
TestKVMDriverInstallOrUpdate (13.26s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
E0501 03:14:59.361058   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (13.26s)

                                                
                                    
TestErrorSpam/setup (45.87s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-913645 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-913645 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-913645 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-913645 --driver=kvm2  --container-runtime=containerd: (45.866789147s)
--- PASS: TestErrorSpam/setup (45.87s)

                                                
                                    
TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.75s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.66s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 pause
--- PASS: TestErrorSpam/pause (1.66s)

                                                
                                    
TestErrorSpam/unpause (1.7s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
TestErrorSpam/stop (4.95s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 stop: (1.614501694s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 stop: (1.351107413s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-913645 --log_dir /tmp/nospam-913645 stop: (1.987121598s)
--- PASS: TestErrorSpam/stop (4.95s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18779-13407/.minikube/files/etc/test/nested/copy/20785/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (98.35s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167406 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0501 02:17:38.815323   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:38.821023   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:38.831307   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:38.851597   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:38.891911   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:38.972228   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:39.132625   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:39.453197   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:40.094074   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:41.374321   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:43.936086   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:49.056854   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:17:59.297133   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:18:19.777669   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-167406 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m38.346425607s)
--- PASS: TestFunctional/serial/StartWithProxy (98.35s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (44.16s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167406 --alsologtostderr -v=8
E0501 02:19:00.738377   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-167406 --alsologtostderr -v=8: (44.162169133s)
functional_test.go:659: soft start took 44.16284212s for "functional-167406" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.16s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-167406 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 cache add registry.k8s.io/pause:3.1: (1.250641321s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 cache add registry.k8s.io/pause:3.3: (1.313831949s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 cache add registry.k8s.io/pause:latest: (1.215274278s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (3.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-167406 /tmp/TestFunctionalserialCacheCmdcacheadd_local2849631132/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 cache add minikube-local-cache-test:functional-167406
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 cache add minikube-local-cache-test:functional-167406: (2.715582811s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 cache delete minikube-local-cache-test:functional-167406
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-167406
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (222.295991ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 cache reload: (1.124368739s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
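The cache_reload sequence above removes pause:latest from the node's image store, confirms crictl inspecti now fails, runs minikube cache reload, and re-checks. A sketch of that round trip (commands copied from the log, error handling simplified; this is not the functional_test.go implementation):

package main

import (
	"fmt"
	"os/exec"
)

// Sketch: delete the image in the node, verify it is gone, reload the cache,
// verify it is back.
func main() {
	mk := "out/minikube-linux-amd64"
	p := "functional-167406"
	img := "registry.k8s.io/pause:latest"

	_ = exec.Command(mk, "-p", p, "ssh", "sudo crictl rmi "+img).Run()
	if err := exec.Command(mk, "-p", p, "ssh", "sudo crictl inspecti "+img).Run(); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	_ = exec.Command(mk, "-p", p, "cache", "reload").Run()
	if err := exec.Command(mk, "-p", p, "ssh", "sudo crictl inspecti "+img).Run(); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}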

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 kubectl -- --context functional-167406 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-167406 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.12s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167406 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-167406 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.116943573s)
functional_test.go:757: restart took 40.117109249s for "functional-167406" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.12s)
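ExtraConfig restarts the existing profile with --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision, the component.key=value form minikube uses to pass flags through to control-plane components (the option also shows up later in the DryRun output as ExtraOptions). A sketch of restarting with that flag and then checking it reached the apiserver follows; the component=kube-apiserver label selector and the grep over the pod YAML are assumptions, not taken from functional_test.go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Sketch: restart with an extra apiserver flag, then confirm the flag shows up
// on the kube-apiserver pod spec.
func main() {
	_ = exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-167406",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all").Run()

	out, _ := exec.Command("kubectl", "--context", "functional-167406", "-n", "kube-system",
		"get", "pods", "-l", "component=kube-apiserver", "-o", "yaml").Output()
	if strings.Contains(string(out), "NamespaceAutoProvision") {
		fmt.Println("enable-admission-plugins reached the apiserver command line")
	}
}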

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-167406 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
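ComponentHealth lists the tier=control-plane pods as JSON and reports each component's phase and Ready status, as shown in the lines above. A sketch of pulling the same fields with kubectl and Go's JSON decoder (label selector and output flag from the log; the struct keeps only the fields the sketch needs and is not the test's own type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Sketch: print phase and Ready condition for each control-plane pod.
func main() {
	out, err := exec.Command("kubectl", "--context", "functional-167406",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var list struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string                            `json:"phase"`
				Conditions []struct{ Type, Status string } `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, pod := range list.Items {
		ready := "Unknown"
		for _, c := range pod.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", pod.Metadata.Labels["component"], pod.Status.Phase, ready)
	}
}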

                                                
                                    
TestFunctional/serial/LogsCmd (1.62s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 logs: (1.619741087s)
--- PASS: TestFunctional/serial/LogsCmd (1.62s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.57s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 logs --file /tmp/TestFunctionalserialLogsFileCmd3855886406/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 logs --file /tmp/TestFunctionalserialLogsFileCmd3855886406/001/logs.txt: (1.573594118s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 config get cpus: exit status 14 (55.382381ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 config get cpus: exit status 14 (54.733756ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
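The ConfigCmd run shows the contract for minikube config: config get on an unset key exits with status 14, and succeeds once config set cpus 2 has run. A small Go sketch of checking that behaviour (binary path, profile, and exit code taken from the log; this is illustrative, not the test's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Sketch: get on a missing key exits 14, get after set exits 0.
func main() {
	mk := "out/minikube-linux-amd64"
	p := "functional-167406"

	getExit := func() int {
		err := exec.Command(mk, "-p", p, "config", "get", "cpus").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return 0
	}

	_ = exec.Command(mk, "-p", p, "config", "unset", "cpus").Run()
	fmt.Println("after unset, exit code:", getExit()) // expect 14

	_ = exec.Command(mk, "-p", p, "config", "set", "cpus", "2").Run()
	fmt.Println("after set, exit code:", getExit()) // expect 0
}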

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167406 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-167406 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (155.032914ms)

                                                
                                                
-- stdout --
	* [functional-167406] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:20:46.966336   29869 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:20:46.966965   29869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:20:46.967025   29869 out.go:304] Setting ErrFile to fd 2...
	I0501 02:20:46.967044   29869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:20:46.967638   29869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:20:46.968737   29869 out.go:298] Setting JSON to false
	I0501 02:20:46.969749   29869 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3789,"bootTime":1714526258,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:20:46.969813   29869 start.go:139] virtualization: kvm guest
	I0501 02:20:46.971775   29869 out.go:177] * [functional-167406] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 02:20:46.973519   29869 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:20:46.974849   29869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:20:46.973521   29869 notify.go:220] Checking for updates...
	I0501 02:20:46.977529   29869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:20:46.979033   29869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 02:20:46.980305   29869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:20:46.981607   29869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:20:46.983355   29869 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:20:46.983734   29869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:46.983772   29869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:46.998219   29869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I0501 02:20:46.998615   29869 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:46.999223   29869 main.go:141] libmachine: Using API Version  1
	I0501 02:20:46.999246   29869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:46.999550   29869 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:46.999756   29869 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:46.999988   29869 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:20:47.000257   29869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:47.000288   29869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:47.014754   29869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44605
	I0501 02:20:47.015151   29869 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:47.015664   29869 main.go:141] libmachine: Using API Version  1
	I0501 02:20:47.015693   29869 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:47.016054   29869 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:47.016251   29869 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:47.049160   29869 out.go:177] * Using the kvm2 driver based on existing profile
	I0501 02:20:47.050482   29869 start.go:297] selected driver: kvm2
	I0501 02:20:47.050498   29869 start.go:901] validating driver "kvm2" against &{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:20:47.050623   29869 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:20:47.052723   29869 out.go:177] 
	W0501 02:20:47.053995   29869 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0501 02:20:47.055240   29869 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167406 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.29s)
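The first DryRun invocation requests only 250MB against the existing profile and minikube refuses with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY in the stderr above) before touching the VM; the second dry run, without a memory override, succeeds. A sketch of asserting on that exit code (flags and expected code from the log; not the test's implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Sketch: a dry-run start with too little memory should fail fast with exit 23.
func main() {
	err := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-167406",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=containerd").Run()

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("got the expected insufficient-memory exit code")
	} else {
		fmt.Println("unexpected result:", err)
	}
}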

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-167406 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-167406 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (141.521236ms)

                                                
                                                
-- stdout --
	* [functional-167406] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:20:46.814345   29817 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:20:46.814429   29817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:20:46.814436   29817 out.go:304] Setting ErrFile to fd 2...
	I0501 02:20:46.814440   29817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:20:46.814747   29817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:20:46.815244   29817 out.go:298] Setting JSON to false
	I0501 02:20:46.816184   29817 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3789,"bootTime":1714526258,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 02:20:46.816247   29817 start.go:139] virtualization: kvm guest
	I0501 02:20:46.818412   29817 out.go:177] * [functional-167406] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0501 02:20:46.819700   29817 notify.go:220] Checking for updates...
	I0501 02:20:46.819709   29817 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 02:20:46.821124   29817 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 02:20:46.822204   29817 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 02:20:46.823320   29817 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 02:20:46.824612   29817 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 02:20:46.825873   29817 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 02:20:46.827549   29817 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:20:46.828143   29817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:46.828218   29817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:46.843793   29817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45895
	I0501 02:20:46.844158   29817 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:46.844654   29817 main.go:141] libmachine: Using API Version  1
	I0501 02:20:46.844674   29817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:46.845033   29817 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:46.845190   29817 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:46.845418   29817 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 02:20:46.845680   29817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:20:46.845710   29817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:20:46.860498   29817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33395
	I0501 02:20:46.860905   29817 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:20:46.861320   29817 main.go:141] libmachine: Using API Version  1
	I0501 02:20:46.861342   29817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:20:46.861637   29817 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:20:46.861793   29817 main.go:141] libmachine: (functional-167406) Calling .DriverName
	I0501 02:20:46.892320   29817 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0501 02:20:46.893424   29817 start.go:297] selected driver: kvm2
	I0501 02:20:46.893440   29817 start.go:901] validating driver "kvm2" against &{Name:functional-167406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714386659-18769@sha256:2307161b966936863fe51493570c92a8ccd6d1ed9c62870159694db91f271d1e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-167406 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0501 02:20:46.894007   29817 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 02:20:46.896795   29817 out.go:177] 
	W0501 02:20:46.898259   29817 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0501 02:20:46.899605   29817 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (122.06s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
E0501 02:20:22.658944   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.39.209:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.39.209:8441: connect: connection refused
helpers_test.go:344: "storage-provisioner" [4b8999c0-090e-491d-9b39-9b6e98af676a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 45.005451171s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-167406 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-167406 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-167406 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-167406 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-167406 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-167406 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-167406 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-167406 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-167406 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f28107e6-5639-4abc-ba36-88e42c650337] Pending
helpers_test.go:344: "sp-pod" [f28107e6-5639-4abc-ba36-88e42c650337] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolumeclaim "myclaim" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "sp-pod" [f28107e6-5639-4abc-ba36-88e42c650337] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolume "pvc-a309fe1b-c5c2-42f6-8ede-a536fa4d0c71" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "sp-pod" [f28107e6-5639-4abc-ba36-88e42c650337] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f28107e6-5639-4abc-ba36-88e42c650337] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 46.004777022s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-167406 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-167406 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-167406 delete -f testdata/storage-provisioner/pod.yaml: (1.141055289s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-167406 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e5612cd3-51a4-4a42-979b-1f832fe2536e] Pending
helpers_test.go:344: "sp-pod" [e5612cd3-51a4-4a42-979b-1f832fe2536e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e5612cd3-51a4-4a42-979b-1f832fe2536e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004671391s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-167406 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (122.06s)
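For reference, the PVC flow exercised above boils down to: create a claim, schedule a pod that mounts it, write a file into the mount, recreate the pod, and check the file is still there. The sketch below is not the project's test helper; it reproduces the same steps by shelling out to kubectl from Go. The context name and manifest paths are taken from the log, while the helper name and error handling are illustrative, and the readiness polling the real test does between steps is omitted for brevity.

// pvc_roundtrip.go - minimal sketch of the PVC round-trip shown above.
package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the functional-167406 context and echoes
// the combined output, roughly what the test's (dbg) Run lines record.
func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-167406"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("kubectl %v\n%s\n", args, out)
	return err
}

func main() {
	// The real test waits for pod readiness between these steps; that wait is
	// omitted here, so this is only an outline of the sequence.
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}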

                                                
                                    
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh -n functional-167406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 cp functional-167406:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2548486059/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh -n functional-167406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh -n functional-167406 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)
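The copy check above is symmetric: push a local file into the node, read it back over ssh, then pull it back out again. A minimal sketch of the same loop driven from Go follows; the binary path, profile name, and in-guest path come from the log, while the local destination path for the copy-back is made up for illustration.

// cp_check.go - rough sketch of the copy/verify loop shown above.
package main

import (
	"fmt"
	"os/exec"
)

// minikube invokes the same binary the test uses, against the same profile.
func minikube(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-167406"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Copy a file into the node, then read it back via ssh to verify content.
	if _, err := minikube("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	got, err := minikube("ssh", "-n", "functional-167406", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		fmt.Println("ssh cat failed:", err)
		return
	}
	fmt.Print(got)
	// Copy the file back out of the node to a local path (illustrative path).
	if _, err := minikube("cp", "functional-167406:/home/docker/cp-test.txt", "/tmp/cp-test-copy.txt"); err != nil {
		fmt.Println("cp back failed:", err)
	}
}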

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/20785/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo cat /etc/test/nested/copy/20785/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/20785.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo cat /etc/ssl/certs/20785.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/20785.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo cat /usr/share/ca-certificates/20785.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/207852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo cat /etc/ssl/certs/207852.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/207852.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo cat /usr/share/ca-certificates/207852.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 ssh "sudo systemctl is-active docker": exit status 1 (238.465984ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 ssh "sudo systemctl is-active crio": exit status 1 (214.896938ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
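The two non-zero exits above are expected: with containerd selected as the runtime, "systemctl is-active docker" and "systemctl is-active crio" print "inactive" and exit non-zero inside the guest (status 3 in the stderr above), which minikube ssh surfaces as exit status 1. The sketch below performs the same check but keys off the printed state rather than the exit code; it assumes only the systemctl behaviour visible in the log, and the function names are my own.

// runtime_check.go - sketch of the non-active-runtime check shown above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive asks systemd over minikube ssh whether a unit is active. The exit
// code is deliberately ignored: is-active exits non-zero for inactive units
// (status 3 in the log above), so we inspect the printed state instead.
func isActive(unit string) bool {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-167406",
		"ssh", fmt.Sprintf("sudo systemctl is-active %s", unit)).Output()
	return strings.TrimSpace(string(out)) == "active"
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		if isActive(unit) {
			fmt.Printf("%s is unexpectedly active\n", unit)
		} else {
			fmt.Printf("%s is inactive, as expected with containerd\n", unit)
		}
	}
}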

                                                
                                    
TestFunctional/parallel/License (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-167406 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-167406
docker.io/library/minikube-local-cache-test:functional-167406
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167406 image ls --format short --alsologtostderr:
I0501 02:20:49.750373   30351 out.go:291] Setting OutFile to fd 1 ...
I0501 02:20:49.750667   30351 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:49.750681   30351 out.go:304] Setting ErrFile to fd 2...
I0501 02:20:49.750687   30351 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:49.750991   30351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
I0501 02:20:49.751813   30351 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:49.751960   30351 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:49.752605   30351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:49.752670   30351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:49.770864   30351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
I0501 02:20:49.771456   30351 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:49.772193   30351 main.go:141] libmachine: Using API Version  1
I0501 02:20:49.772221   30351 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:49.772671   30351 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:49.772877   30351 main.go:141] libmachine: (functional-167406) Calling .GetState
I0501 02:20:49.774977   30351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:49.775017   30351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:49.790086   30351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36857
I0501 02:20:49.790491   30351 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:49.791079   30351 main.go:141] libmachine: Using API Version  1
I0501 02:20:49.791113   30351 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:49.791415   30351 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:49.791656   30351 main.go:141] libmachine: (functional-167406) Calling .DriverName
I0501 02:20:49.791869   30351 ssh_runner.go:195] Run: systemctl --version
I0501 02:20:49.791895   30351 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
I0501 02:20:49.794963   30351 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:49.795381   30351 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
I0501 02:20:49.795413   30351 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:49.795642   30351 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
I0501 02:20:49.795815   30351 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
I0501 02:20:49.795980   30351 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
I0501 02:20:49.796092   30351 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
I0501 02:20:49.896475   30351 ssh_runner.go:195] Run: sudo crictl images --output json
I0501 02:20:49.953368   30351 main.go:141] libmachine: Making call to close driver server
I0501 02:20:49.953379   30351 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:49.953652   30351 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:49.953683   30351 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:20:49.953693   30351 main.go:141] libmachine: Making call to close driver server
I0501 02:20:49.953705   30351 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:49.953926   30351 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
I0501 02:20:49.953983   30351 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:49.954027   30351 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-167406 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.30.0            | sha256:c42f13 | 32.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/library/minikube-local-cache-test | functional-167406  | sha256:1e7843 | 991B   |
| gcr.io/google-containers/addon-resizer      | functional-167406  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/kube-scheduler              | v1.30.0            | sha256:259c82 | 19.2MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/kube-proxy                  | v1.30.0            | sha256:a0bf55 | 29MB   |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4950bb | 27.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| registry.k8s.io/kube-controller-manager     | v1.30.0            | sha256:c7aad4 | 31MB   |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167406 image ls --format table --alsologtostderr:
I0501 02:20:50.327971   30529 out.go:291] Setting OutFile to fd 1 ...
I0501 02:20:50.328230   30529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:50.328240   30529 out.go:304] Setting ErrFile to fd 2...
I0501 02:20:50.328244   30529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:50.328424   30529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
I0501 02:20:50.328959   30529 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:50.329046   30529 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:50.329391   30529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:50.329429   30529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:50.344367   30529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44617
I0501 02:20:50.344839   30529 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:50.345516   30529 main.go:141] libmachine: Using API Version  1
I0501 02:20:50.345537   30529 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:50.345839   30529 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:50.346005   30529 main.go:141] libmachine: (functional-167406) Calling .GetState
I0501 02:20:50.347731   30529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:50.347766   30529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:50.362029   30529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43655
I0501 02:20:50.362372   30529 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:50.362794   30529 main.go:141] libmachine: Using API Version  1
I0501 02:20:50.362817   30529 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:50.363204   30529 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:50.363406   30529 main.go:141] libmachine: (functional-167406) Calling .DriverName
I0501 02:20:50.363627   30529 ssh_runner.go:195] Run: systemctl --version
I0501 02:20:50.363652   30529 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
I0501 02:20:50.366105   30529 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:50.366541   30529 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
I0501 02:20:50.366571   30529 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:50.366698   30529 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
I0501 02:20:50.366861   30529 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
I0501 02:20:50.367010   30529 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
I0501 02:20:50.367179   30529 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
I0501 02:20:50.470899   30529 ssh_runner.go:195] Run: sudo crictl images --output json
I0501 02:20:50.536202   30529 main.go:141] libmachine: Making call to close driver server
I0501 02:20:50.536245   30529 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:50.536524   30529 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:50.536545   30529 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:20:50.536553   30529 main.go:141] libmachine: Making call to close driver server
I0501 02:20:50.536560   30529 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:50.536569   30529 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
I0501 02:20:50.536797   30529 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:50.536809   30529 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-167406 image ls --format json --alsologtostderr:
[{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"27755257"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-167406"],"size":"10823156"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"32663599"},{"id":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"29020473"},{"id":"sha256:1e7843f1fbee2e56f2ac1d7980bd2a15d631dbd5638013ef4351ba8d393fc593","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-167406"],"size":"991"},{"id":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"31030110"},{"id":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"19208660"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167406 image ls --format json --alsologtostderr:
I0501 02:20:50.077498   30421 out.go:291] Setting OutFile to fd 1 ...
I0501 02:20:50.077980   30421 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:50.077996   30421 out.go:304] Setting ErrFile to fd 2...
I0501 02:20:50.078003   30421 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:50.081498   30421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
I0501 02:20:50.082322   30421 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:50.082461   30421 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:50.082982   30421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:50.083034   30421 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:50.101672   30421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43593
I0501 02:20:50.102150   30421 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:50.102720   30421 main.go:141] libmachine: Using API Version  1
I0501 02:20:50.102742   30421 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:50.103145   30421 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:50.103348   30421 main.go:141] libmachine: (functional-167406) Calling .GetState
I0501 02:20:50.105247   30421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:50.105283   30421 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:50.119463   30421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
I0501 02:20:50.119829   30421 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:50.120243   30421 main.go:141] libmachine: Using API Version  1
I0501 02:20:50.120261   30421 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:50.120602   30421 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:50.120762   30421 main.go:141] libmachine: (functional-167406) Calling .DriverName
I0501 02:20:50.120936   30421 ssh_runner.go:195] Run: systemctl --version
I0501 02:20:50.120960   30421 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
I0501 02:20:50.123448   30421 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:50.123905   30421 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
I0501 02:20:50.123945   30421 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:50.123983   30421 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
I0501 02:20:50.124143   30421 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
I0501 02:20:50.124246   30421 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
I0501 02:20:50.124420   30421 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
I0501 02:20:50.214557   30421 ssh_runner.go:195] Run: sudo crictl images --output json
I0501 02:20:50.261765   30421 main.go:141] libmachine: Making call to close driver server
I0501 02:20:50.261782   30421 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:50.261993   30421 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:50.262010   30421 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:20:50.262019   30421 main.go:141] libmachine: Making call to close driver server
I0501 02:20:50.262023   30421 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
I0501 02:20:50.262040   30421 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:50.262254   30421 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:50.262270   30421 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:20:50.262283   30421 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
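The JSON stdout above is an array of image records with id, repoDigests, repoTags, and size (size is a string of bytes). The sketch below decodes that shape in Go; the struct and field mapping are inferred from the output shown here, not taken from minikube's source.

// imagelist.go - sketch decoding "image ls --format json" output as seen above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string in the output
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-167406",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}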

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-167406 image ls --format yaml --alsologtostderr:
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-167406
size: "10823156"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "32663599"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "27755257"
- id: sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "31030110"
- id: sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "29020473"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:1e7843f1fbee2e56f2ac1d7980bd2a15d631dbd5638013ef4351ba8d393fc593
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-167406
size: "991"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "19208660"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167406 image ls --format yaml --alsologtostderr:
I0501 02:20:49.794959   30374 out.go:291] Setting OutFile to fd 1 ...
I0501 02:20:49.795101   30374 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:49.795114   30374 out.go:304] Setting ErrFile to fd 2...
I0501 02:20:49.795121   30374 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:49.795417   30374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
I0501 02:20:49.796104   30374 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:49.796248   30374 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:49.796859   30374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:49.796905   30374 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:49.812739   30374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
I0501 02:20:49.813181   30374 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:49.813707   30374 main.go:141] libmachine: Using API Version  1
I0501 02:20:49.813735   30374 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:49.814172   30374 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:49.814425   30374 main.go:141] libmachine: (functional-167406) Calling .GetState
I0501 02:20:49.816492   30374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:49.816537   30374 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:49.830968   30374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
I0501 02:20:49.831378   30374 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:49.831868   30374 main.go:141] libmachine: Using API Version  1
I0501 02:20:49.831894   30374 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:49.832264   30374 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:49.832464   30374 main.go:141] libmachine: (functional-167406) Calling .DriverName
I0501 02:20:49.832680   30374 ssh_runner.go:195] Run: systemctl --version
I0501 02:20:49.832701   30374 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
I0501 02:20:49.834882   30374 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:49.835259   30374 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
I0501 02:20:49.835288   30374 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:49.835404   30374 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
I0501 02:20:49.835592   30374 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
I0501 02:20:49.835739   30374 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
I0501 02:20:49.835875   30374 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
I0501 02:20:49.934784   30374 ssh_runner.go:195] Run: sudo crictl images --output json
I0501 02:20:50.009135   30374 main.go:141] libmachine: Making call to close driver server
I0501 02:20:50.009152   30374 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:50.009450   30374 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:50.009483   30374 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:20:50.009492   30374 main.go:141] libmachine: Making call to close driver server
I0501 02:20:50.009498   30374 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:50.009781   30374 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
I0501 02:20:50.009879   30374 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:50.009936   30374 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 ssh pgrep buildkitd: exit status 1 (215.815817ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image build -t localhost/my-image:functional-167406 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 image build -t localhost/my-image:functional-167406 testdata/build --alsologtostderr: (4.109950721s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-167406 image build -t localhost/my-image:functional-167406 testdata/build --alsologtostderr:
I0501 02:20:50.247774   30488 out.go:291] Setting OutFile to fd 1 ...
I0501 02:20:50.248089   30488 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:50.248100   30488 out.go:304] Setting ErrFile to fd 2...
I0501 02:20:50.248111   30488 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0501 02:20:50.248279   30488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
I0501 02:20:50.248822   30488 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:50.249317   30488 config.go:182] Loaded profile config "functional-167406": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
I0501 02:20:50.249654   30488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:50.249688   30488 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:50.268299   30488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43815
I0501 02:20:50.268806   30488 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:50.269357   30488 main.go:141] libmachine: Using API Version  1
I0501 02:20:50.269380   30488 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:50.269753   30488 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:50.269944   30488 main.go:141] libmachine: (functional-167406) Calling .GetState
I0501 02:20:50.272052   30488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0501 02:20:50.272108   30488 main.go:141] libmachine: Launching plugin server for driver kvm2
I0501 02:20:50.288653   30488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
I0501 02:20:50.289116   30488 main.go:141] libmachine: () Calling .GetVersion
I0501 02:20:50.289597   30488 main.go:141] libmachine: Using API Version  1
I0501 02:20:50.289619   30488 main.go:141] libmachine: () Calling .SetConfigRaw
I0501 02:20:50.290070   30488 main.go:141] libmachine: () Calling .GetMachineName
I0501 02:20:50.290263   30488 main.go:141] libmachine: (functional-167406) Calling .DriverName
I0501 02:20:50.290449   30488 ssh_runner.go:195] Run: systemctl --version
I0501 02:20:50.290473   30488 main.go:141] libmachine: (functional-167406) Calling .GetSSHHostname
I0501 02:20:50.293186   30488 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:50.293540   30488 main.go:141] libmachine: (functional-167406) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f2:fc:c7", ip: ""} in network mk-functional-167406: {Iface:virbr1 ExpiryTime:2024-05-01 03:17:13 +0000 UTC Type:0 Mac:52:54:00:f2:fc:c7 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-167406 Clientid:01:52:54:00:f2:fc:c7}
I0501 02:20:50.293580   30488 main.go:141] libmachine: (functional-167406) DBG | domain functional-167406 has defined IP address 192.168.39.209 and MAC address 52:54:00:f2:fc:c7 in network mk-functional-167406
I0501 02:20:50.293825   30488 main.go:141] libmachine: (functional-167406) Calling .GetSSHPort
I0501 02:20:50.293985   30488 main.go:141] libmachine: (functional-167406) Calling .GetSSHKeyPath
I0501 02:20:50.294116   30488 main.go:141] libmachine: (functional-167406) Calling .GetSSHUsername
I0501 02:20:50.294245   30488 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/functional-167406/id_rsa Username:docker}
I0501 02:20:50.388263   30488 build_images.go:161] Building image from path: /tmp/build.944891164.tar
I0501 02:20:50.388314   30488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0501 02:20:50.406897   30488 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.944891164.tar
I0501 02:20:50.412151   30488 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.944891164.tar: stat -c "%s %y" /var/lib/minikube/build/build.944891164.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.944891164.tar': No such file or directory
I0501 02:20:50.412183   30488 ssh_runner.go:362] scp /tmp/build.944891164.tar --> /var/lib/minikube/build/build.944891164.tar (3072 bytes)
I0501 02:20:50.442448   30488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.944891164
I0501 02:20:50.455041   30488 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.944891164 -xf /var/lib/minikube/build/build.944891164.tar
I0501 02:20:50.466069   30488 containerd.go:394] Building image: /var/lib/minikube/build/build.944891164
I0501 02:20:50.466141   30488 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.944891164 --local dockerfile=/var/lib/minikube/build/build.944891164 --output type=image,name=localhost/my-image:functional-167406
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.2s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:c9afcab9d4a1212f64210a56174779e1e20c49bc175ddfc816cc36d519485fb4
#8 exporting manifest sha256:c9afcab9d4a1212f64210a56174779e1e20c49bc175ddfc816cc36d519485fb4 0.0s done
#8 exporting config sha256:50d43e95fc9c15272a5952c5a85309148e37c9095f664288d804763c25aa76c2 0.0s done
#8 naming to localhost/my-image:functional-167406 done
#8 DONE 0.2s
I0501 02:20:54.234518   30488 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.944891164 --local dockerfile=/var/lib/minikube/build/build.944891164 --output type=image,name=localhost/my-image:functional-167406: (3.768335708s)
I0501 02:20:54.234603   30488 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.944891164
I0501 02:20:54.259503   30488 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.944891164.tar
I0501 02:20:54.277675   30488 build_images.go:217] Built localhost/my-image:functional-167406 from /tmp/build.944891164.tar
I0501 02:20:54.277707   30488 build_images.go:133] succeeded building to: functional-167406
I0501 02:20:54.277712   30488 build_images.go:134] failed building to: 
I0501 02:20:54.277734   30488 main.go:141] libmachine: Making call to close driver server
I0501 02:20:54.277744   30488 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:54.278003   30488 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:54.278015   30488 main.go:141] libmachine: (functional-167406) DBG | Closing plugin on server side
I0501 02:20:54.278023   30488 main.go:141] libmachine: Making call to close connection to plugin binary
I0501 02:20:54.278034   30488 main.go:141] libmachine: Making call to close driver server
I0501 02:20:54.278042   30488 main.go:141] libmachine: (functional-167406) Calling .Close
I0501 02:20:54.278270   30488 main.go:141] libmachine: Successfully made call to close driver server
I0501 02:20:54.278294   30488 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.58s)
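
The build above works by copying a tarred build context into the node and invoking buildctl against containerd. As a rough way to exercise the same path by hand, a minimal Go sketch follows; it assumes `minikube` is on PATH, that the `image build` subcommand is available (this log only shows the in-VM buildctl call), and that `./testdata/build` is a hypothetical directory containing a Dockerfile like the one transferred above.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "functional-167406" // profile name taken from this run, for illustration only

	// Build an image inside the cluster's container runtime (containerd here),
	// then list images to confirm it landed, mirroring the build/ls steps logged above.
	build := exec.Command("minikube", "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "./testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	out, err := exec.Command("minikube", "-p", profile, "image", "ls").CombinedOutput()
	if err != nil {
		log.Fatalf("image ls failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```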

TestFunctional/parallel/ImageCommands/Setup (3.28s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.262483909s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-167406
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.28s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image load --daemon gcr.io/google-containers/addon-resizer:functional-167406 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 image load --daemon gcr.io/google-containers/addon-resizer:functional-167406 --alsologtostderr: (3.873296487s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.09s)
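
`image load --daemon` pushes an image from the host Docker daemon into the cluster's containerd store, and the follow-up `image ls` is how the test confirms it arrived. A minimal sketch of that round trip, assuming `minikube` is on PATH and reusing the profile and image names from this run purely as examples:

```go
package main

import (
	"log"
	"os/exec"
	"strings"
)

// run executes minikube with the given arguments and returns its combined output.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "functional-167406" // illustrative profile name from this run
	img := "gcr.io/google-containers/addon-resizer:" + profile

	// Push the host-daemon image into the cluster runtime, then check the listing.
	run("-p", profile, "image", "load", "--daemon", img)
	if !strings.Contains(run("-p", profile, "image", "ls"), "addon-resizer") {
		log.Fatal("addon-resizer not visible in the cluster image list")
	}
}
```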

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image load --daemon gcr.io/google-containers/addon-resizer:functional-167406 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 image load --daemon gcr.io/google-containers/addon-resizer:functional-167406 --alsologtostderr: (2.571215786s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.80s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.674923432s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-167406
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image load --daemon gcr.io/google-containers/addon-resizer:functional-167406 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 image load --daemon gcr.io/google-containers/addon-resizer:functional-167406 --alsologtostderr: (4.245368422s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image save gcr.io/google-containers/addon-resizer:functional-167406 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.97s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image rm gcr.io/google-containers/addon-resizer:functional-167406 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-167406 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.216776266s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.44s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-167406
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 image save --daemon gcr.io/google-containers/addon-resizer:functional-167406 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-167406
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.02s)
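
The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon tests above form a save/remove/load cycle through a tarball plus a final `image save --daemon` back into the host daemon. A small sketch chaining the same commands, with the tarball path chosen arbitrarily and the profile name reused from this run for illustration:

```go
package main

import (
	"log"
	"os/exec"
)

// mk runs minikube against the given profile and aborts on any failure.
func mk(profile string, args ...string) {
	out, err := exec.Command("minikube", append([]string{"-p", profile}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
}

func main() {
	profile := "functional-167406" // illustrative profile name from this run
	img := "gcr.io/google-containers/addon-resizer:" + profile
	tar := "/tmp/addon-resizer-save.tar" // hypothetical tarball path

	// Save the cluster image to a tarball, remove it from the cluster,
	// then load it back from the tarball, as the tests above do in sequence.
	mk(profile, "image", "save", img, tar)
	mk(profile, "image", "rm", img)
	mk(profile, "image", "load", tar)

	// Finally push it back into the host docker daemon and verify it is visible there.
	mk(profile, "image", "save", "--daemon", img)
	if err := exec.Command("docker", "image", "inspect", img).Run(); err != nil {
		log.Fatalf("image not present in host daemon: %v", err)
	}
}
```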

TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.28s)

TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "211.013694ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "55.010498ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "207.033511ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "60.227823ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)
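
`profile list -o json` (with or without `--light`) prints machine-readable profile data. A sketch that decodes it generically, deliberately not assuming any particular field names since the schema is not shown in this log; the only assumption is that the command emits a single JSON object:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Decode the profile listing generically; only the fact that the command
	// prints one JSON object is assumed, not any particular field names.
	out, err := exec.Command("minikube", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	var profiles map[string]any
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatalf("unexpected profile list output: %v", err)
	}
	for key, val := range profiles {
		fmt.Printf("%s: %v\n", key, val)
	}
}
```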

TestFunctional/parallel/MountCmd/specific-port (1.59s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdspecific-port3491307736/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (246.493589ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdspecific-port3491307736/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 ssh "sudo umount -f /mount-9p": exit status 1 (198.921354ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-167406 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdspecific-port3491307736/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)
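
The specific-port test starts a background 9p mount on port 46464 and then probes `findmnt` inside the guest; the first probe above fails simply because the mount has not finished coming up, so the harness retries. A sketch of the same start-then-poll pattern, assuming `/tmp/mount-src` is a host directory we are free to create and reusing the profile name from this run:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-167406" // illustrative profile name from this run
	src := "/tmp/mount-src"        // hypothetical host directory to export over 9p
	if err := os.MkdirAll(src, 0o755); err != nil {
		log.Fatalf("creating %s: %v", src, err)
	}

	// Start the 9p mount in the background on a fixed port, as the test does.
	mount := exec.Command("minikube", "mount", "-p", profile, src+":/mount-9p", "--port", "46464")
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}
	defer mount.Process.Kill()

	// Poll findmnt inside the guest until the mount appears; the first probe in
	// the log above failed the same way before the mount had finished coming up.
	for i := 0; i < 10; i++ {
		check := exec.Command("minikube", "-p", profile, "ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := check.CombinedOutput(); err == nil {
			log.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never became visible in the guest")
}
```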

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T" /mount1: exit status 1 (248.340345ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-167406 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-167406 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1756776039/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)
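
`minikube mount --kill=true` tears down any background mount helpers for the profile, which is why the cleanup above can treat missing parent processes as already dead. A minimal sketch of invoking that cleanup, with the profile name reused from this run for illustration:

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	profile := "functional-167406" // illustrative profile name from this run

	// Terminate any background mount helpers for the profile; this is the same
	// cleanup step the VerifyCleanup test runs before stopping its three mounts.
	out, err := exec.Command("minikube", "mount", "-p", profile, "--kill=true").CombinedOutput()
	if err != nil {
		log.Fatalf("mount cleanup failed: %v\n%s", err, out)
	}
	log.Printf("cleanup output:\n%s", out)
}
```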

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-167406 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-167406
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-167406
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-167406
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (280.88s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-965643 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0501 02:22:38.815219   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:23:06.500148   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:25:16.314631   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:16.319979   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:16.330253   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:16.350538   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:16.390802   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:16.471148   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:16.631576   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:16.952149   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:17.593091   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:18.873684   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:21.434469   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:26.555186   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:36.796295   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:25:57.276763   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:26:38.237959   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-965643 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m40.160359682s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (280.88s)
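
StartCluster brings up a multi-control-plane profile with the flags shown above and then asks for per-node status. A sketch of the same two steps, dropping only the verbosity flags; the profile name is reused from this run as an example, and a non-zero `status` exit is tolerated because it encodes stopped or degraded nodes rather than a hard error:

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	profile := "ha-965643" // illustrative profile name from this run

	// Bring up the HA cluster with the flags from the logged command, streaming output.
	start := exec.Command("minikube", "start", "-p", profile, "--wait=true",
		"--memory=2200", "--ha", "--driver=kvm2", "--container-runtime=containerd")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	// Query per-node status; a non-zero exit is reported but not treated as fatal.
	status := exec.Command("minikube", "-p", profile, "status")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	if err := status.Run(); err != nil {
		log.Printf("status exited non-zero: %v", err)
	}
}
```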

TestMultiControlPlane/serial/DeployApp (7.16s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-965643 -- rollout status deployment/busybox: (4.742685964s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-fwrdm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-sprtr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-xppc2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-fwrdm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-sprtr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-xppc2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-fwrdm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-sprtr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-xppc2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.16s)
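
DeployApp waits for the busybox deployment to roll out and then runs nslookup from every pod. A sketch of that loop via `minikube kubectl`, assuming the same deployment name and reusing the profile name from this run; the helper function here is hypothetical:

```go
package main

import (
	"log"
	"os/exec"
	"strings"
)

// kubectl runs a command through `minikube kubectl -p <profile> --` and returns its output.
func kubectl(profile string, args ...string) string {
	full := append([]string{"kubectl", "-p", profile, "--"}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	profile := "ha-965643" // illustrative profile name from this run

	// Wait for the busybox deployment, then resolve kubernetes.default from every pod,
	// following the same sequence of calls logged above.
	kubectl(profile, "rollout", "status", "deployment/busybox")
	pods := kubectl(profile, "get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	for _, pod := range strings.Fields(pods) {
		kubectl(profile, "exec", pod, "--", "nslookup", "kubernetes.default")
	}
}
```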

TestMultiControlPlane/serial/PingHostFromPods (1.31s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-fwrdm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-fwrdm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-sprtr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-sprtr -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-xppc2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-965643 -- exec busybox-fc5497c4f-xppc2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.31s)

TestMultiControlPlane/serial/AddWorkerNode (48.56s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-965643 -v=7 --alsologtostderr
E0501 02:27:38.813277   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-965643 -v=7 --alsologtostderr: (47.706819994s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.56s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-965643 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

TestMultiControlPlane/serial/CopyFile (13.58s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp testdata/cp-test.txt ha-965643:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1481903732/001/cp-test_ha-965643.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643:/home/docker/cp-test.txt ha-965643-m02:/home/docker/cp-test_ha-965643_ha-965643-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m02 "sudo cat /home/docker/cp-test_ha-965643_ha-965643-m02.txt"
E0501 02:28:00.158660   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643:/home/docker/cp-test.txt ha-965643-m03:/home/docker/cp-test_ha-965643_ha-965643-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m03 "sudo cat /home/docker/cp-test_ha-965643_ha-965643-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643:/home/docker/cp-test.txt ha-965643-m04:/home/docker/cp-test_ha-965643_ha-965643-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m04 "sudo cat /home/docker/cp-test_ha-965643_ha-965643-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp testdata/cp-test.txt ha-965643-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1481903732/001/cp-test_ha-965643-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m02:/home/docker/cp-test.txt ha-965643:/home/docker/cp-test_ha-965643-m02_ha-965643.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643 "sudo cat /home/docker/cp-test_ha-965643-m02_ha-965643.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m02:/home/docker/cp-test.txt ha-965643-m03:/home/docker/cp-test_ha-965643-m02_ha-965643-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m03 "sudo cat /home/docker/cp-test_ha-965643-m02_ha-965643-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m02:/home/docker/cp-test.txt ha-965643-m04:/home/docker/cp-test_ha-965643-m02_ha-965643-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m04 "sudo cat /home/docker/cp-test_ha-965643-m02_ha-965643-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp testdata/cp-test.txt ha-965643-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1481903732/001/cp-test_ha-965643-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m03:/home/docker/cp-test.txt ha-965643:/home/docker/cp-test_ha-965643-m03_ha-965643.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643 "sudo cat /home/docker/cp-test_ha-965643-m03_ha-965643.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m03:/home/docker/cp-test.txt ha-965643-m02:/home/docker/cp-test_ha-965643-m03_ha-965643-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m02 "sudo cat /home/docker/cp-test_ha-965643-m03_ha-965643-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m03:/home/docker/cp-test.txt ha-965643-m04:/home/docker/cp-test_ha-965643-m03_ha-965643-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m04 "sudo cat /home/docker/cp-test_ha-965643-m03_ha-965643-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp testdata/cp-test.txt ha-965643-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1481903732/001/cp-test_ha-965643-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m04:/home/docker/cp-test.txt ha-965643:/home/docker/cp-test_ha-965643-m04_ha-965643.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643 "sudo cat /home/docker/cp-test_ha-965643-m04_ha-965643.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m04:/home/docker/cp-test.txt ha-965643-m02:/home/docker/cp-test_ha-965643-m04_ha-965643-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m02 "sudo cat /home/docker/cp-test_ha-965643-m04_ha-965643-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 cp ha-965643-m04:/home/docker/cp-test.txt ha-965643-m03:/home/docker/cp-test_ha-965643-m04_ha-965643-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 ssh -n ha-965643-m03 "sudo cat /home/docker/cp-test_ha-965643-m04_ha-965643-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.58s)
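
CopyFile repeats one basic round trip for every node pair: `minikube cp` a file onto a node, then `ssh -n` into that node and `cat` it back. A sketch of a single such round trip, with the profile and paths taken from the log purely as examples:

```go
package main

import (
	"log"
	"os/exec"
)

// mk runs minikube with the given arguments and returns its combined output.
func mk(args ...string) []byte {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	profile := "ha-965643" // illustrative profile name from this run

	// Copy a local file onto the primary node, then read it back over ssh; this is
	// the single round trip the CopyFile test repeats for every node pair.
	mk("-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
	out := mk("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
	log.Printf("remote copy contains:\n%s", out)
}
```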

TestMultiControlPlane/serial/StopSecondaryNode (92.45s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-965643 node stop m02 -v=7 --alsologtostderr: (1m31.777578129s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr: exit status 7 (667.282644ms)

-- stdout --
	ha-965643
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-965643-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-965643-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-965643-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0501 02:29:43.225378   35830 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:29:43.225509   35830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:29:43.225520   35830 out.go:304] Setting ErrFile to fd 2...
	I0501 02:29:43.225526   35830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:29:43.225726   35830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:29:43.225918   35830 out.go:298] Setting JSON to false
	I0501 02:29:43.225949   35830 mustload.go:65] Loading cluster: ha-965643
	I0501 02:29:43.226049   35830 notify.go:220] Checking for updates...
	I0501 02:29:43.226397   35830 config.go:182] Loaded profile config "ha-965643": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:29:43.226414   35830 status.go:255] checking status of ha-965643 ...
	I0501 02:29:43.226856   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.226931   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.242639   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43731
	I0501 02:29:43.243086   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.243709   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.243742   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.244072   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.244276   35830 main.go:141] libmachine: (ha-965643) Calling .GetState
	I0501 02:29:43.245932   35830 status.go:330] ha-965643 host status = "Running" (err=<nil>)
	I0501 02:29:43.245951   35830 host.go:66] Checking if "ha-965643" exists ...
	I0501 02:29:43.246219   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.246267   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.262058   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0501 02:29:43.262523   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.262973   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.262996   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.263323   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.263499   35830 main.go:141] libmachine: (ha-965643) Calling .GetIP
	I0501 02:29:43.266158   35830 main.go:141] libmachine: (ha-965643) DBG | domain ha-965643 has defined MAC address 52:54:00:5f:af:bf in network mk-ha-965643
	I0501 02:29:43.266605   35830 main.go:141] libmachine: (ha-965643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:af:bf", ip: ""} in network mk-ha-965643: {Iface:virbr1 ExpiryTime:2024-05-01 03:22:35 +0000 UTC Type:0 Mac:52:54:00:5f:af:bf Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-965643 Clientid:01:52:54:00:5f:af:bf}
	I0501 02:29:43.266633   35830 main.go:141] libmachine: (ha-965643) DBG | domain ha-965643 has defined IP address 192.168.39.103 and MAC address 52:54:00:5f:af:bf in network mk-ha-965643
	I0501 02:29:43.266745   35830 host.go:66] Checking if "ha-965643" exists ...
	I0501 02:29:43.267194   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.267241   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.282639   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I0501 02:29:43.283070   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.283555   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.283590   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.283921   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.284130   35830 main.go:141] libmachine: (ha-965643) Calling .DriverName
	I0501 02:29:43.284330   35830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:29:43.284350   35830 main.go:141] libmachine: (ha-965643) Calling .GetSSHHostname
	I0501 02:29:43.286817   35830 main.go:141] libmachine: (ha-965643) DBG | domain ha-965643 has defined MAC address 52:54:00:5f:af:bf in network mk-ha-965643
	I0501 02:29:43.287227   35830 main.go:141] libmachine: (ha-965643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:af:bf", ip: ""} in network mk-ha-965643: {Iface:virbr1 ExpiryTime:2024-05-01 03:22:35 +0000 UTC Type:0 Mac:52:54:00:5f:af:bf Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-965643 Clientid:01:52:54:00:5f:af:bf}
	I0501 02:29:43.287253   35830 main.go:141] libmachine: (ha-965643) DBG | domain ha-965643 has defined IP address 192.168.39.103 and MAC address 52:54:00:5f:af:bf in network mk-ha-965643
	I0501 02:29:43.287438   35830 main.go:141] libmachine: (ha-965643) Calling .GetSSHPort
	I0501 02:29:43.287584   35830 main.go:141] libmachine: (ha-965643) Calling .GetSSHKeyPath
	I0501 02:29:43.287730   35830 main.go:141] libmachine: (ha-965643) Calling .GetSSHUsername
	I0501 02:29:43.287874   35830 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/ha-965643/id_rsa Username:docker}
	I0501 02:29:43.383642   35830 ssh_runner.go:195] Run: systemctl --version
	I0501 02:29:43.391778   35830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:29:43.411653   35830 kubeconfig.go:125] found "ha-965643" server: "https://192.168.39.254:8443"
	I0501 02:29:43.411680   35830 api_server.go:166] Checking apiserver status ...
	I0501 02:29:43.411711   35830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:29:43.431696   35830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup
	W0501 02:29:43.444598   35830 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:29:43.444667   35830 ssh_runner.go:195] Run: ls
	I0501 02:29:43.449910   35830 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:29:43.454816   35830 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:29:43.454836   35830 status.go:422] ha-965643 apiserver status = Running (err=<nil>)
	I0501 02:29:43.454845   35830 status.go:257] ha-965643 status: &{Name:ha-965643 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:29:43.454860   35830 status.go:255] checking status of ha-965643-m02 ...
	I0501 02:29:43.455184   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.455217   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.470439   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33575
	I0501 02:29:43.470901   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.471421   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.471446   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.471746   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.471981   35830 main.go:141] libmachine: (ha-965643-m02) Calling .GetState
	I0501 02:29:43.473636   35830 status.go:330] ha-965643-m02 host status = "Stopped" (err=<nil>)
	I0501 02:29:43.473665   35830 status.go:343] host is not running, skipping remaining checks
	I0501 02:29:43.473673   35830 status.go:257] ha-965643-m02 status: &{Name:ha-965643-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:29:43.473693   35830 status.go:255] checking status of ha-965643-m03 ...
	I0501 02:29:43.473965   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.474000   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.489340   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0501 02:29:43.489725   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.490156   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.490180   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.490507   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.490706   35830 main.go:141] libmachine: (ha-965643-m03) Calling .GetState
	I0501 02:29:43.492215   35830 status.go:330] ha-965643-m03 host status = "Running" (err=<nil>)
	I0501 02:29:43.492232   35830 host.go:66] Checking if "ha-965643-m03" exists ...
	I0501 02:29:43.492647   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.492688   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.506616   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34435
	I0501 02:29:43.507019   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.507516   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.507536   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.507856   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.508032   35830 main.go:141] libmachine: (ha-965643-m03) Calling .GetIP
	I0501 02:29:43.510850   35830 main.go:141] libmachine: (ha-965643-m03) DBG | domain ha-965643-m03 has defined MAC address 52:54:00:c0:97:8b in network mk-ha-965643
	I0501 02:29:43.511254   35830 main.go:141] libmachine: (ha-965643-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:97:8b", ip: ""} in network mk-ha-965643: {Iface:virbr1 ExpiryTime:2024-05-01 03:26:02 +0000 UTC Type:0 Mac:52:54:00:c0:97:8b Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-965643-m03 Clientid:01:52:54:00:c0:97:8b}
	I0501 02:29:43.511276   35830 main.go:141] libmachine: (ha-965643-m03) DBG | domain ha-965643-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:c0:97:8b in network mk-ha-965643
	I0501 02:29:43.511408   35830 host.go:66] Checking if "ha-965643-m03" exists ...
	I0501 02:29:43.511709   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.511747   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.525966   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40307
	I0501 02:29:43.526314   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.526780   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.526798   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.527101   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.527311   35830 main.go:141] libmachine: (ha-965643-m03) Calling .DriverName
	I0501 02:29:43.527501   35830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:29:43.527531   35830 main.go:141] libmachine: (ha-965643-m03) Calling .GetSSHHostname
	I0501 02:29:43.530338   35830 main.go:141] libmachine: (ha-965643-m03) DBG | domain ha-965643-m03 has defined MAC address 52:54:00:c0:97:8b in network mk-ha-965643
	I0501 02:29:43.530740   35830 main.go:141] libmachine: (ha-965643-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:97:8b", ip: ""} in network mk-ha-965643: {Iface:virbr1 ExpiryTime:2024-05-01 03:26:02 +0000 UTC Type:0 Mac:52:54:00:c0:97:8b Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-965643-m03 Clientid:01:52:54:00:c0:97:8b}
	I0501 02:29:43.530763   35830 main.go:141] libmachine: (ha-965643-m03) DBG | domain ha-965643-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:c0:97:8b in network mk-ha-965643
	I0501 02:29:43.531022   35830 main.go:141] libmachine: (ha-965643-m03) Calling .GetSSHPort
	I0501 02:29:43.531210   35830 main.go:141] libmachine: (ha-965643-m03) Calling .GetSSHKeyPath
	I0501 02:29:43.531326   35830 main.go:141] libmachine: (ha-965643-m03) Calling .GetSSHUsername
	I0501 02:29:43.531434   35830 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/ha-965643-m03/id_rsa Username:docker}
	I0501 02:29:43.621908   35830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:29:43.643084   35830 kubeconfig.go:125] found "ha-965643" server: "https://192.168.39.254:8443"
	I0501 02:29:43.643135   35830 api_server.go:166] Checking apiserver status ...
	I0501 02:29:43.643179   35830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:29:43.658695   35830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup
	W0501 02:29:43.669863   35830 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1272/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:29:43.669913   35830 ssh_runner.go:195] Run: ls
	I0501 02:29:43.675701   35830 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0501 02:29:43.680083   35830 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0501 02:29:43.680106   35830 status.go:422] ha-965643-m03 apiserver status = Running (err=<nil>)
	I0501 02:29:43.680114   35830 status.go:257] ha-965643-m03 status: &{Name:ha-965643-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:29:43.680128   35830 status.go:255] checking status of ha-965643-m04 ...
	I0501 02:29:43.680407   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.680448   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.694744   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I0501 02:29:43.695154   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.695547   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.695564   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.695923   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.696097   35830 main.go:141] libmachine: (ha-965643-m04) Calling .GetState
	I0501 02:29:43.697759   35830 status.go:330] ha-965643-m04 host status = "Running" (err=<nil>)
	I0501 02:29:43.697776   35830 host.go:66] Checking if "ha-965643-m04" exists ...
	I0501 02:29:43.698054   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.698086   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.711720   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0501 02:29:43.712120   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.712568   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.712598   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.712978   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.713179   35830 main.go:141] libmachine: (ha-965643-m04) Calling .GetIP
	I0501 02:29:43.715939   35830 main.go:141] libmachine: (ha-965643-m04) DBG | domain ha-965643-m04 has defined MAC address 52:54:00:e3:6e:29 in network mk-ha-965643
	I0501 02:29:43.716404   35830 main.go:141] libmachine: (ha-965643-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6e:29", ip: ""} in network mk-ha-965643: {Iface:virbr1 ExpiryTime:2024-05-01 03:27:25 +0000 UTC Type:0 Mac:52:54:00:e3:6e:29 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-965643-m04 Clientid:01:52:54:00:e3:6e:29}
	I0501 02:29:43.716425   35830 main.go:141] libmachine: (ha-965643-m04) DBG | domain ha-965643-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:e3:6e:29 in network mk-ha-965643
	I0501 02:29:43.716586   35830 host.go:66] Checking if "ha-965643-m04" exists ...
	I0501 02:29:43.716874   35830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:29:43.716906   35830 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:29:43.732456   35830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35421
	I0501 02:29:43.732823   35830 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:29:43.733253   35830 main.go:141] libmachine: Using API Version  1
	I0501 02:29:43.733288   35830 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:29:43.733573   35830 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:29:43.733745   35830 main.go:141] libmachine: (ha-965643-m04) Calling .DriverName
	I0501 02:29:43.733903   35830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:29:43.733926   35830 main.go:141] libmachine: (ha-965643-m04) Calling .GetSSHHostname
	I0501 02:29:43.736481   35830 main.go:141] libmachine: (ha-965643-m04) DBG | domain ha-965643-m04 has defined MAC address 52:54:00:e3:6e:29 in network mk-ha-965643
	I0501 02:29:43.736879   35830 main.go:141] libmachine: (ha-965643-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:6e:29", ip: ""} in network mk-ha-965643: {Iface:virbr1 ExpiryTime:2024-05-01 03:27:25 +0000 UTC Type:0 Mac:52:54:00:e3:6e:29 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-965643-m04 Clientid:01:52:54:00:e3:6e:29}
	I0501 02:29:43.736910   35830 main.go:141] libmachine: (ha-965643-m04) DBG | domain ha-965643-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:e3:6e:29 in network mk-ha-965643
	I0501 02:29:43.737026   35830 main.go:141] libmachine: (ha-965643-m04) Calling .GetSSHPort
	I0501 02:29:43.737182   35830 main.go:141] libmachine: (ha-965643-m04) Calling .GetSSHKeyPath
	I0501 02:29:43.737312   35830 main.go:141] libmachine: (ha-965643-m04) Calling .GetSSHUsername
	I0501 02:29:43.737449   35830 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/ha-965643-m04/id_rsa Username:docker}
	I0501 02:29:43.820219   35830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:29:43.836563   35830 status.go:257] ha-965643-m04 status: &{Name:ha-965643-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (92.45s)
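
The stderr block above shows how the status command decides a control-plane node's apiserver state: it resolves the cluster server from the kubeconfig, confirms a kube-apiserver process over SSH, then probes /healthz and reports Running on an HTTP 200 "ok". Below is a minimal Go sketch of that final probe only, using the virtual-IP endpoint taken from the log; skipping TLS verification is purely to keep the sketch self-contained and is not what minikube itself does.

```go
// healthzprobe: a minimal sketch of the last step in the status log above —
// GET https://<apiserver>/healthz and treat HTTP 200 "ok" as Running.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: the real check presents the cluster's client certificates
		// instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// 192.168.39.254:8443 is the HA virtual IP shown in the log above.
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK {
		fmt.Printf("apiserver status = Running (%s)\n", body)
	} else {
		fmt.Printf("apiserver status = Error (HTTP %d)\n", resp.StatusCode)
	}
}
```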

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.41s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (45.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 node start m02 -v=7 --alsologtostderr
E0501 02:30:16.314596   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-965643 node start m02 -v=7 --alsologtostderr: (44.456212806s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.37s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (475.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-965643 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-965643 -v=7 --alsologtostderr
E0501 02:30:43.999226   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:32:38.813637   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 02:34:01.860592   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-965643 -v=7 --alsologtostderr: (4m38.720077938s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-965643 --wait=true -v=7 --alsologtostderr
E0501 02:35:16.315289   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:37:38.813138   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-965643 --wait=true -v=7 --alsologtostderr: (3m16.220146387s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-965643
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (475.05s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-965643 node delete m03 -v=7 --alsologtostderr: (6.398051591s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.15s)
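
The readiness check above pipes kubectl output through a Go template that prints the status of every node's Ready condition. The sketch below runs the same template with Go's text/template against a trimmed-down, hand-written stand-in for `kubectl get nodes -o json`, just to show what the template evaluates; the sample JSON is illustrative, not captured from this run.

```go
// readycheck: evaluate the test's readiness template against a small,
// hand-written node list (two nodes left after deleting m03, both Ready).
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// The exact template string the test passes to kubectl above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Illustrative stand-in for `kubectl get nodes -o json`.
	const nodesJSON = `{"items":[
		{"status":{"conditions":[{"type":"Ready","status":"True"}]}},
		{"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}}
	]}`

	var nodes interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" once per node
		panic(err)
	}
}
```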

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (275.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 stop -v=7 --alsologtostderr
E0501 02:40:16.315050   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:41:39.359937   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:42:38.813810   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-965643 stop -v=7 --alsologtostderr: (4m35.636323535s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr: exit status 7 (111.860747ms)

                                                
                                                
-- stdout --
	ha-965643
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-965643-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-965643-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:43:08.452419   39754 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:43:08.452691   39754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:43:08.452701   39754 out.go:304] Setting ErrFile to fd 2...
	I0501 02:43:08.452706   39754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:43:08.452916   39754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:43:08.453081   39754 out.go:298] Setting JSON to false
	I0501 02:43:08.453108   39754 mustload.go:65] Loading cluster: ha-965643
	I0501 02:43:08.453167   39754 notify.go:220] Checking for updates...
	I0501 02:43:08.453687   39754 config.go:182] Loaded profile config "ha-965643": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:43:08.453709   39754 status.go:255] checking status of ha-965643 ...
	I0501 02:43:08.454136   39754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:43:08.454187   39754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:43:08.472066   39754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39627
	I0501 02:43:08.472444   39754 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:43:08.473081   39754 main.go:141] libmachine: Using API Version  1
	I0501 02:43:08.473121   39754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:43:08.473424   39754 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:43:08.473637   39754 main.go:141] libmachine: (ha-965643) Calling .GetState
	I0501 02:43:08.475360   39754 status.go:330] ha-965643 host status = "Stopped" (err=<nil>)
	I0501 02:43:08.475375   39754 status.go:343] host is not running, skipping remaining checks
	I0501 02:43:08.475380   39754 status.go:257] ha-965643 status: &{Name:ha-965643 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:43:08.475412   39754 status.go:255] checking status of ha-965643-m02 ...
	I0501 02:43:08.475682   39754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:43:08.475715   39754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:43:08.490115   39754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I0501 02:43:08.490557   39754 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:43:08.491011   39754 main.go:141] libmachine: Using API Version  1
	I0501 02:43:08.491035   39754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:43:08.491380   39754 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:43:08.491542   39754 main.go:141] libmachine: (ha-965643-m02) Calling .GetState
	I0501 02:43:08.493076   39754 status.go:330] ha-965643-m02 host status = "Stopped" (err=<nil>)
	I0501 02:43:08.493091   39754 status.go:343] host is not running, skipping remaining checks
	I0501 02:43:08.493098   39754 status.go:257] ha-965643-m02 status: &{Name:ha-965643-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:43:08.493115   39754 status.go:255] checking status of ha-965643-m04 ...
	I0501 02:43:08.493383   39754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:43:08.493412   39754 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:43:08.507684   39754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0501 02:43:08.508069   39754 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:43:08.508522   39754 main.go:141] libmachine: Using API Version  1
	I0501 02:43:08.508543   39754 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:43:08.508870   39754 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:43:08.509067   39754 main.go:141] libmachine: (ha-965643-m04) Calling .GetState
	I0501 02:43:08.510624   39754 status.go:330] ha-965643-m04 host status = "Stopped" (err=<nil>)
	I0501 02:43:08.510637   39754 status.go:343] host is not running, skipping remaining checks
	I0501 02:43:08.510643   39754 status.go:257] ha-965643-m04 status: &{Name:ha-965643-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (275.75s)
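
The struct literal logged in the stderr above (&{Name:ha-965643 Host:Stopped Kubelet:Stopped ...}) is what the -- stdout -- block renders in human-readable form, with worker nodes omitting the apiserver and kubeconfig lines. The sketch below reproduces that mapping with a simplified struct and printer of our own; it mirrors only the fields visible in the log and is not minikube's actual formatting code.

```go
// statusprint: map the status struct fields seen in the stderr log onto the
// per-node text block seen in -- stdout --. Simplified illustration only.
package main

import "fmt"

type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func printStatus(s Status) {
	fmt.Println(s.Name)
	if s.Worker {
		fmt.Println("type: Worker")
	} else {
		fmt.Println("type: Control Plane")
	}
	fmt.Println("host:", s.Host)
	fmt.Println("kubelet:", s.Kubelet)
	if !s.Worker {
		// Worker nodes have no apiserver/kubeconfig lines in the output above.
		fmt.Println("apiserver:", s.APIServer)
		fmt.Println("kubeconfig:", s.Kubeconfig)
	}
	fmt.Println()
}

func main() {
	printStatus(Status{Name: "ha-965643", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"})
	printStatus(Status{Name: "ha-965643-m02", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"})
	printStatus(Status{Name: "ha-965643-m04", Host: "Stopped", Kubelet: "Stopped", Worker: true})
}
```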

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (158.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-965643 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0501 02:45:16.314613   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-965643 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m38.051290809s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (158.83s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (71.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-965643 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-965643 --control-plane -v=7 --alsologtostderr: (1m10.436885689s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-965643 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
TestJSONOutput/start/Command (61.54s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-062148 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0501 02:47:38.813478   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-062148 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m1.543235567s)
--- PASS: TestJSONOutput/start/Command (61.54s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-062148 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-062148 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-062148 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-062148 --output=json --user=testUser: (7.348546459s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-357593 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-357593 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.341537ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"34ed1bfb-bcb5-4424-a08b-9567752b98e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-357593] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f993df92-a7f8-4c4d-8981-dcd63fbf6c09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18779"}}
	{"specversion":"1.0","id":"84a80f4e-72c9-4031-8047-db01c559b041","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0337e8ea-49d1-41f1-a63c-103e75c62612","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig"}}
	{"specversion":"1.0","id":"1dc05e80-6a61-4c5d-8af0-e2eb32823ef9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube"}}
	{"specversion":"1.0","id":"115d8c8f-cb38-408a-bbbd-294b4e64c7d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1efd354b-e4e4-40b4-9534-e6d45474628f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bd1b822f-4510-486c-ae15-23863c2e7812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-357593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-357593
--- PASS: TestErrorJSONOutput (0.21s)
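
Each line in the -- stdout -- block above is a CloudEvents-style JSON object emitted by --output=json. The sketch below decodes the final error event from the log into a small Go struct; the struct covers only the fields visible here and is an illustration, not minikube's own event type.

```go
// jsonevent: decode one of the CloudEvents-style lines from the stdout above
// and pull out the error name, message, and exit code.
package main

import (
	"encoding/json"
	"fmt"
)

type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The final io.k8s.sigs.minikube.error event from the log, trimmed of empty fields.
	line := `{"specversion":"1.0","id":"bd1b822f-4510-486c-ae15-23863c2e7812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit code %s)\n", e.Data["name"], e.Data["message"], e.Data["exitcode"])
}
```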

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (95.91s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-226408 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-226408 --driver=kvm2  --container-runtime=containerd: (46.090932602s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-228628 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-228628 --driver=kvm2  --container-runtime=containerd: (47.150130733s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-226408
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-228628
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-228628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-228628
helpers_test.go:175: Cleaning up "first-226408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-226408
--- PASS: TestMinikubeProfile (95.91s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-358000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0501 02:50:16.314969   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-358000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.886563749s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.89s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.55s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-358000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-358000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.55s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.43s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-374598 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0501 02:50:41.861475   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-374598 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.430836798s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.43s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-374598 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-374598 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.95s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-358000 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.95s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-374598 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-374598 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-374598
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-374598: (1.369924665s)
--- PASS: TestMountStart/serial/Stop (1.37s)

                                                
                                    
TestMountStart/serial/RestartStopped (26.37s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-374598
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-374598: (25.366365169s)
--- PASS: TestMountStart/serial/RestartStopped (26.37s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-374598 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-374598 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (106.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-545652 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0501 02:52:38.814125   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-545652 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m45.939304265s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.36s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-545652 -- rollout status deployment/busybox: (4.224908303s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-bgkz7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-r8lr9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-bgkz7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-r8lr9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-bgkz7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-r8lr9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.87s)
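
The DeployApp2Nodes block above verifies in-cluster DNS by exec'ing nslookup inside each busybox pod for three names of increasing specificity. A rough Go sketch of that loop, reusing the exact CLI invocation from the log; the pod names are the ones from this run and would differ on any other run.

```go
// dnscheck: run nslookup inside each busybox pod via the bundled kubectl,
// mirroring the exec loop in the test block above. Illustration only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-fc5497c4f-bgkz7", "busybox-fc5497c4f-r8lr9"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			cmd := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", "multinode-545652", "--",
				"exec", pod, "--", "nslookup", name)
			out, err := cmd.CombinedOutput()
			if err != nil {
				fmt.Printf("%s: nslookup %s failed: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s: nslookup %s OK\n", pod, name)
		}
	}
}
```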

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-bgkz7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-bgkz7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-r8lr9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-545652 -- exec busybox-fc5497c4f-r8lr9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                    
TestMultiNode/serial/AddNode (46.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-545652 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-545652 -v 3 --alsologtostderr: (46.167606086s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.75s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-545652 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp testdata/cp-test.txt multinode-545652:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp multinode-545652:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3310997430/001/cp-test_multinode-545652.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp multinode-545652:/home/docker/cp-test.txt multinode-545652-m02:/home/docker/cp-test_multinode-545652_multinode-545652-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m02 "sudo cat /home/docker/cp-test_multinode-545652_multinode-545652-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp multinode-545652:/home/docker/cp-test.txt multinode-545652-m03:/home/docker/cp-test_multinode-545652_multinode-545652-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m03 "sudo cat /home/docker/cp-test_multinode-545652_multinode-545652-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp testdata/cp-test.txt multinode-545652-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp multinode-545652-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3310997430/001/cp-test_multinode-545652-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp multinode-545652-m02:/home/docker/cp-test.txt multinode-545652:/home/docker/cp-test_multinode-545652-m02_multinode-545652.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652 "sudo cat /home/docker/cp-test_multinode-545652-m02_multinode-545652.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp multinode-545652-m02:/home/docker/cp-test.txt multinode-545652-m03:/home/docker/cp-test_multinode-545652-m02_multinode-545652-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m03 "sudo cat /home/docker/cp-test_multinode-545652-m02_multinode-545652-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp testdata/cp-test.txt multinode-545652-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp multinode-545652-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3310997430/001/cp-test_multinode-545652-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp multinode-545652-m03:/home/docker/cp-test.txt multinode-545652:/home/docker/cp-test_multinode-545652-m03_multinode-545652.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652 "sudo cat /home/docker/cp-test_multinode-545652-m03_multinode-545652.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 cp multinode-545652-m03:/home/docker/cp-test.txt multinode-545652-m02:/home/docker/cp-test_multinode-545652-m03_multinode-545652-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 ssh -n multinode-545652-m02 "sudo cat /home/docker/cp-test_multinode-545652-m03_multinode-545652-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.46s)
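
The CopyFile block above repeats one pattern: cp a file onto a node, read it back with `minikube ssh -n <node> "sudo cat ..."`, and compare. The sketch below shows a single round trip of that pattern using the same CLI arguments as the log; it is an illustration of the pattern, not the test's helper code.

```go
// cproundtrip: copy testdata/cp-test.txt into a node, cat it back over ssh,
// and compare with the local copy, as the CopyFile test does above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// run invokes the minikube binary under test with the given arguments.
func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if _, err := run("-p", "multinode-545652", "cp", "testdata/cp-test.txt",
		"multinode-545652:/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	got, err := run("-p", "multinode-545652", "ssh", "-n", "multinode-545652",
		"sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	if strings.TrimSpace(got) == strings.TrimSpace(string(want)) {
		fmt.Println("round trip OK")
	} else {
		fmt.Println("contents differ")
	}
}
```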

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-545652 node stop m03: (1.472073511s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-545652 status: exit status 7 (433.420234ms)

                                                
                                                
-- stdout --
	multinode-545652
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-545652-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-545652-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-545652 status --alsologtostderr: exit status 7 (440.53544ms)

                                                
                                                
-- stdout --
	multinode-545652
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-545652-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-545652-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 02:54:11.834938   47296 out.go:291] Setting OutFile to fd 1 ...
	I0501 02:54:11.835063   47296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:54:11.835074   47296 out.go:304] Setting ErrFile to fd 2...
	I0501 02:54:11.835078   47296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 02:54:11.835294   47296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 02:54:11.835484   47296 out.go:298] Setting JSON to false
	I0501 02:54:11.835510   47296 mustload.go:65] Loading cluster: multinode-545652
	I0501 02:54:11.835636   47296 notify.go:220] Checking for updates...
	I0501 02:54:11.835960   47296 config.go:182] Loaded profile config "multinode-545652": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 02:54:11.835976   47296 status.go:255] checking status of multinode-545652 ...
	I0501 02:54:11.836377   47296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:54:11.836463   47296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:54:11.852678   47296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I0501 02:54:11.853079   47296 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:54:11.853780   47296 main.go:141] libmachine: Using API Version  1
	I0501 02:54:11.853819   47296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:54:11.854210   47296 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:54:11.854439   47296 main.go:141] libmachine: (multinode-545652) Calling .GetState
	I0501 02:54:11.855982   47296 status.go:330] multinode-545652 host status = "Running" (err=<nil>)
	I0501 02:54:11.856001   47296 host.go:66] Checking if "multinode-545652" exists ...
	I0501 02:54:11.856327   47296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:54:11.856366   47296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:54:11.871843   47296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I0501 02:54:11.872233   47296 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:54:11.872690   47296 main.go:141] libmachine: Using API Version  1
	I0501 02:54:11.872723   47296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:54:11.873027   47296 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:54:11.873244   47296 main.go:141] libmachine: (multinode-545652) Calling .GetIP
	I0501 02:54:11.875732   47296 main.go:141] libmachine: (multinode-545652) DBG | domain multinode-545652 has defined MAC address 52:54:00:01:77:73 in network mk-multinode-545652
	I0501 02:54:11.876140   47296 main.go:141] libmachine: (multinode-545652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:77:73", ip: ""} in network mk-multinode-545652: {Iface:virbr1 ExpiryTime:2024-05-01 03:51:38 +0000 UTC Type:0 Mac:52:54:00:01:77:73 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-545652 Clientid:01:52:54:00:01:77:73}
	I0501 02:54:11.876168   47296 main.go:141] libmachine: (multinode-545652) DBG | domain multinode-545652 has defined IP address 192.168.39.13 and MAC address 52:54:00:01:77:73 in network mk-multinode-545652
	I0501 02:54:11.876291   47296 host.go:66] Checking if "multinode-545652" exists ...
	I0501 02:54:11.876654   47296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:54:11.876698   47296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:54:11.890922   47296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0501 02:54:11.891303   47296 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:54:11.891714   47296 main.go:141] libmachine: Using API Version  1
	I0501 02:54:11.891737   47296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:54:11.892022   47296 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:54:11.892229   47296 main.go:141] libmachine: (multinode-545652) Calling .DriverName
	I0501 02:54:11.892388   47296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:54:11.892411   47296 main.go:141] libmachine: (multinode-545652) Calling .GetSSHHostname
	I0501 02:54:11.894496   47296 main.go:141] libmachine: (multinode-545652) DBG | domain multinode-545652 has defined MAC address 52:54:00:01:77:73 in network mk-multinode-545652
	I0501 02:54:11.894858   47296 main.go:141] libmachine: (multinode-545652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:77:73", ip: ""} in network mk-multinode-545652: {Iface:virbr1 ExpiryTime:2024-05-01 03:51:38 +0000 UTC Type:0 Mac:52:54:00:01:77:73 Iaid: IPaddr:192.168.39.13 Prefix:24 Hostname:multinode-545652 Clientid:01:52:54:00:01:77:73}
	I0501 02:54:11.894889   47296 main.go:141] libmachine: (multinode-545652) DBG | domain multinode-545652 has defined IP address 192.168.39.13 and MAC address 52:54:00:01:77:73 in network mk-multinode-545652
	I0501 02:54:11.894966   47296 main.go:141] libmachine: (multinode-545652) Calling .GetSSHPort
	I0501 02:54:11.895357   47296 main.go:141] libmachine: (multinode-545652) Calling .GetSSHKeyPath
	I0501 02:54:11.895519   47296 main.go:141] libmachine: (multinode-545652) Calling .GetSSHUsername
	I0501 02:54:11.895649   47296 sshutil.go:53] new ssh client: &{IP:192.168.39.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/multinode-545652/id_rsa Username:docker}
	I0501 02:54:11.978807   47296 ssh_runner.go:195] Run: systemctl --version
	I0501 02:54:11.986050   47296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:54:12.005427   47296 kubeconfig.go:125] found "multinode-545652" server: "https://192.168.39.13:8443"
	I0501 02:54:12.005463   47296 api_server.go:166] Checking apiserver status ...
	I0501 02:54:12.005496   47296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0501 02:54:12.021506   47296 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0501 02:54:12.033242   47296 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0501 02:54:12.033289   47296 ssh_runner.go:195] Run: ls
	I0501 02:54:12.038126   47296 api_server.go:253] Checking apiserver healthz at https://192.168.39.13:8443/healthz ...
	I0501 02:54:12.042225   47296 api_server.go:279] https://192.168.39.13:8443/healthz returned 200:
	ok
	I0501 02:54:12.042245   47296 status.go:422] multinode-545652 apiserver status = Running (err=<nil>)
	I0501 02:54:12.042257   47296 status.go:257] multinode-545652 status: &{Name:multinode-545652 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:54:12.042288   47296 status.go:255] checking status of multinode-545652-m02 ...
	I0501 02:54:12.042672   47296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:54:12.042714   47296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:54:12.057431   47296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38541
	I0501 02:54:12.057820   47296 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:54:12.058295   47296 main.go:141] libmachine: Using API Version  1
	I0501 02:54:12.058329   47296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:54:12.058602   47296 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:54:12.058784   47296 main.go:141] libmachine: (multinode-545652-m02) Calling .GetState
	I0501 02:54:12.060191   47296 status.go:330] multinode-545652-m02 host status = "Running" (err=<nil>)
	I0501 02:54:12.060204   47296 host.go:66] Checking if "multinode-545652-m02" exists ...
	I0501 02:54:12.060469   47296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:54:12.060500   47296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:54:12.074592   47296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0501 02:54:12.074989   47296 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:54:12.075434   47296 main.go:141] libmachine: Using API Version  1
	I0501 02:54:12.075451   47296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:54:12.075734   47296 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:54:12.075922   47296 main.go:141] libmachine: (multinode-545652-m02) Calling .GetIP
	I0501 02:54:12.078379   47296 main.go:141] libmachine: (multinode-545652-m02) DBG | domain multinode-545652-m02 has defined MAC address 52:54:00:36:ad:21 in network mk-multinode-545652
	I0501 02:54:12.078768   47296 main.go:141] libmachine: (multinode-545652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:ad:21", ip: ""} in network mk-multinode-545652: {Iface:virbr1 ExpiryTime:2024-05-01 03:52:41 +0000 UTC Type:0 Mac:52:54:00:36:ad:21 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-545652-m02 Clientid:01:52:54:00:36:ad:21}
	I0501 02:54:12.078796   47296 main.go:141] libmachine: (multinode-545652-m02) DBG | domain multinode-545652-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:36:ad:21 in network mk-multinode-545652
	I0501 02:54:12.078919   47296 host.go:66] Checking if "multinode-545652-m02" exists ...
	I0501 02:54:12.079241   47296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:54:12.079274   47296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:54:12.094687   47296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0501 02:54:12.095216   47296 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:54:12.096712   47296 main.go:141] libmachine: Using API Version  1
	I0501 02:54:12.096742   47296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:54:12.097049   47296 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:54:12.097243   47296 main.go:141] libmachine: (multinode-545652-m02) Calling .DriverName
	I0501 02:54:12.097391   47296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0501 02:54:12.097406   47296 main.go:141] libmachine: (multinode-545652-m02) Calling .GetSSHHostname
	I0501 02:54:12.099631   47296 main.go:141] libmachine: (multinode-545652-m02) DBG | domain multinode-545652-m02 has defined MAC address 52:54:00:36:ad:21 in network mk-multinode-545652
	I0501 02:54:12.099996   47296 main.go:141] libmachine: (multinode-545652-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:ad:21", ip: ""} in network mk-multinode-545652: {Iface:virbr1 ExpiryTime:2024-05-01 03:52:41 +0000 UTC Type:0 Mac:52:54:00:36:ad:21 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:multinode-545652-m02 Clientid:01:52:54:00:36:ad:21}
	I0501 02:54:12.100023   47296 main.go:141] libmachine: (multinode-545652-m02) DBG | domain multinode-545652-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:36:ad:21 in network mk-multinode-545652
	I0501 02:54:12.100180   47296 main.go:141] libmachine: (multinode-545652-m02) Calling .GetSSHPort
	I0501 02:54:12.100343   47296 main.go:141] libmachine: (multinode-545652-m02) Calling .GetSSHKeyPath
	I0501 02:54:12.100464   47296 main.go:141] libmachine: (multinode-545652-m02) Calling .GetSSHUsername
	I0501 02:54:12.100559   47296 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18779-13407/.minikube/machines/multinode-545652-m02/id_rsa Username:docker}
	I0501 02:54:12.183398   47296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0501 02:54:12.202441   47296 status.go:257] multinode-545652-m02 status: &{Name:multinode-545652-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0501 02:54:12.202474   47296 status.go:255] checking status of multinode-545652-m03 ...
	I0501 02:54:12.202807   47296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 02:54:12.202850   47296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 02:54:12.217593   47296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44017
	I0501 02:54:12.218001   47296 main.go:141] libmachine: () Calling .GetVersion
	I0501 02:54:12.218400   47296 main.go:141] libmachine: Using API Version  1
	I0501 02:54:12.218427   47296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 02:54:12.218749   47296 main.go:141] libmachine: () Calling .GetMachineName
	I0501 02:54:12.218940   47296 main.go:141] libmachine: (multinode-545652-m03) Calling .GetState
	I0501 02:54:12.220493   47296 status.go:330] multinode-545652-m03 host status = "Stopped" (err=<nil>)
	I0501 02:54:12.220507   47296 status.go:343] host is not running, skipping remaining checks
	I0501 02:54:12.220516   47296 status.go:257] multinode-545652-m03 status: &{Name:multinode-545652-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (26.4s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-545652 node start m03 -v=7 --alsologtostderr: (25.757770259s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.40s)
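
The node-restart flow this test drives can be reproduced by hand; a minimal sketch, reusing the profile and node names from this run (any multinode profile works the same way):

	# restart the worker node that was stopped in the previous step
	out/minikube-linux-amd64 -p multinode-545652 node start m03 -v=7 --alsologtostderr
	# confirm the node is back and the cluster reports it Ready
	out/minikube-linux-amd64 -p multinode-545652 status -v=7 --alsologtostderr
	kubectl get nodes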

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (303.2s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-545652
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-545652
E0501 02:55:16.314563   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 02:57:38.815311   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-545652: (3m5.382268178s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-545652 --wait=true -v=8 --alsologtostderr
E0501 02:58:19.360123   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-545652 --wait=true -v=8 --alsologtostderr: (1m57.705569633s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-545652
--- PASS: TestMultiNode/serial/RestartKeepsNodes (303.20s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.15s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-545652 node delete m03: (1.621202529s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.15s)
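
The readiness check above is a kubectl go-template query; the same template with shell quoting normalized (a sketch, not part of the test output) prints one line per node with the status of its Ready condition:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# after deleting m03, two " True" lines are expected, one per remaining node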

                                                
                                    
TestMultiNode/serial/StopMultiNode (184.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 stop
E0501 03:00:16.314773   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 03:02:38.815269   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-545652 stop: (3m3.899707557s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-545652 status: exit status 7 (90.347317ms)

                                                
                                                
-- stdout --
	multinode-545652
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-545652-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-545652 status --alsologtostderr: exit status 7 (87.88693ms)

                                                
                                                
-- stdout --
	multinode-545652
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-545652-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:02:48.013498   49971 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:02:48.013663   49971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:02:48.013680   49971 out.go:304] Setting ErrFile to fd 2...
	I0501 03:02:48.013690   49971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:02:48.014117   49971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 03:02:48.014352   49971 out.go:298] Setting JSON to false
	I0501 03:02:48.014379   49971 mustload.go:65] Loading cluster: multinode-545652
	I0501 03:02:48.014486   49971 notify.go:220] Checking for updates...
	I0501 03:02:48.014738   49971 config.go:182] Loaded profile config "multinode-545652": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 03:02:48.014752   49971 status.go:255] checking status of multinode-545652 ...
	I0501 03:02:48.015104   49971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 03:02:48.015155   49971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:02:48.029623   49971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38483
	I0501 03:02:48.030063   49971 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:02:48.030582   49971 main.go:141] libmachine: Using API Version  1
	I0501 03:02:48.030603   49971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:02:48.031013   49971 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:02:48.031219   49971 main.go:141] libmachine: (multinode-545652) Calling .GetState
	I0501 03:02:48.032774   49971 status.go:330] multinode-545652 host status = "Stopped" (err=<nil>)
	I0501 03:02:48.032788   49971 status.go:343] host is not running, skipping remaining checks
	I0501 03:02:48.032793   49971 status.go:257] multinode-545652 status: &{Name:multinode-545652 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0501 03:02:48.032816   49971 status.go:255] checking status of multinode-545652-m02 ...
	I0501 03:02:48.033075   49971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0501 03:02:48.033105   49971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0501 03:02:48.047100   49971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0501 03:02:48.047472   49971 main.go:141] libmachine: () Calling .GetVersion
	I0501 03:02:48.047835   49971 main.go:141] libmachine: Using API Version  1
	I0501 03:02:48.047856   49971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0501 03:02:48.048105   49971 main.go:141] libmachine: () Calling .GetMachineName
	I0501 03:02:48.048263   49971 main.go:141] libmachine: (multinode-545652-m02) Calling .GetState
	I0501 03:02:48.049692   49971 status.go:330] multinode-545652-m02 host status = "Stopped" (err=<nil>)
	I0501 03:02:48.049707   49971 status.go:343] host is not running, skipping remaining checks
	I0501 03:02:48.049713   49971 status.go:257] multinode-545652-m02 status: &{Name:multinode-545652-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.08s)
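
Note that the non-zero exits above are the expected outcome here: with every node stopped, the state is conveyed through the exit code (7 in this run, alongside "Stopped" for host, kubelet and apiserver) rather than through a failure message. A quick manual re-check, reusing this run's profile name:

	out/minikube-linux-amd64 -p multinode-545652 status
	# prints the per-node "Stopped" summary shown above and exits 7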

                                                
                                    
TestMultiNode/serial/RestartMultiNode (83s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-545652 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-545652 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m22.467115986s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-545652 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.00s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-545652
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-545652-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-545652-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (74.890137ms)

                                                
                                                
-- stdout --
	* [multinode-545652-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-545652-m02' is duplicated with machine name 'multinode-545652-m02' in profile 'multinode-545652'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-545652-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-545652-m03 --driver=kvm2  --container-runtime=containerd: (47.354881183s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-545652
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-545652: exit status 80 (228.34432ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-545652 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-545652-m03 already exists in multinode-545652-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-545652-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.67s)

                                                
                                    
TestPreload (270.35s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-755837 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0501 03:05:16.315414   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
E0501 03:07:21.861845   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 03:07:38.813732   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-755837 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (3m9.601748728s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-755837 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-755837 image pull gcr.io/k8s-minikube/busybox: (2.871218218s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-755837
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-755837: (3.302198997s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-755837 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-755837 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m13.327469633s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-755837 image list
helpers_test.go:175: Cleaning up "test-preload-755837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-755837
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-755837: (1.011769575s)
--- PASS: TestPreload (270.35s)
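
The preload exercise above condenses to a handful of commands; a rough sketch, with the profile name and versions taken from this run (optional logging and wait flags dropped for brevity):

	# start on an older Kubernetes without a preloaded image tarball
	out/minikube-linux-amd64 start -p test-preload-755837 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=kvm2 --container-runtime=containerd
	# pull an extra image, stop, then restart on the current default Kubernetes
	out/minikube-linux-amd64 -p test-preload-755837 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-755837
	out/minikube-linux-amd64 start -p test-preload-755837 --memory=2200 --driver=kvm2 --container-runtime=containerd
	# the image pulled before the restart should still appear here
	out/minikube-linux-amd64 -p test-preload-755837 image list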

                                                
                                    
TestScheduledStopUnix (116.61s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-861292 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0501 03:10:16.315127   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-861292 --memory=2048 --driver=kvm2  --container-runtime=containerd: (44.927184797s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-861292 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-861292 -n scheduled-stop-861292
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-861292 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-861292 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-861292 -n scheduled-stop-861292
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-861292
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-861292 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-861292
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-861292: exit status 7 (75.276316ms)

                                                
                                                
-- stdout --
	scheduled-stop-861292
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-861292 -n scheduled-stop-861292
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-861292 -n scheduled-stop-861292: exit status 7 (74.332929ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-861292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-861292
--- PASS: TestScheduledStopUnix (116.61s)
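
The scheduled-stop behaviour verified here boils down to three invocations; a short sketch against this run's profile (any running profile would do):

	# schedule a stop five minutes out, then inspect the countdown
	out/minikube-linux-amd64 stop -p scheduled-stop-861292 --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-861292 -n scheduled-stop-861292
	# either cancel it, or reschedule with a short window and let it fire
	out/minikube-linux-amd64 stop -p scheduled-stop-861292 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-861292 --schedule 15s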

                                                
                                    
TestRunningBinaryUpgrade (211.04s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.228005422 start -p running-upgrade-788212 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0501 03:12:38.813486   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.228005422 start -p running-upgrade-788212 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m11.970566203s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-788212 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-788212 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m13.992584155s)
helpers_test.go:175: Cleaning up "running-upgrade-788212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-788212
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-788212: (1.309400279s)
--- PASS: TestRunningBinaryUpgrade (211.04s)

                                                
                                    
TestKubernetesUpgrade (222.83s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m7.13442048s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-088753
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-088753: (2.606927166s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-088753 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-088753 status --format={{.Host}}: exit status 7 (99.759416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.34278829s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-088753 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (88.859535ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-088753] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-088753
	    minikube start -p kubernetes-upgrade-088753 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0887532 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-088753 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m16.516271007s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-088753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-088753
--- PASS: TestKubernetesUpgrade (222.83s)
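
The upgrade path covered here is start old, stop, start new; the attempted downgrade is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), as shown above. A condensed sketch using this run's profile and versions:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-088753
	out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.30.0 --driver=kvm2 --container-runtime=containerd
	# going back down is refused; per the suggestion above, delete and recreate the profile instead
	out/minikube-linux-amd64 start -p kubernetes-upgrade-088753 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=containerd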

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-764922 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-764922 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (90.753842ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-764922] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
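
The rejected invocation combines two mutually exclusive options; as the error suggests, the expected usage drops the version pin (or clears it from the global config). A minimal sketch with this run's profile name:

	# invalid: --no-kubernetes together with --kubernetes-version (exit 14, MK_USAGE)
	out/minikube-linux-amd64 start -p NoKubernetes-764922 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=containerd
	# valid: no version pinning when Kubernetes is disabled
	out/minikube-linux-amd64 start -p NoKubernetes-764922 --no-kubernetes --driver=kvm2 --container-runtime=containerd
	# if a version is set in the global config, unset it first
	minikube config unset kubernetes-version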

                                                
                                    
TestPause/serial/Start (83.91s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-104532 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-104532 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m23.907946689s)
--- PASS: TestPause/serial/Start (83.91s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (97.91s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-764922 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-764922 --driver=kvm2  --container-runtime=containerd: (1m37.641433508s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-764922 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (97.91s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (67.06s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-104532 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-104532 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m7.036263708s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (67.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (22.28s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-764922 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-764922 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (21.02116874s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-764922 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-764922 status -o json: exit status 2 (253.305297ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-764922","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-764922
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-764922: (1.005676655s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.28s)

                                                
                                    
TestNoKubernetes/serial/Start (37.12s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-764922 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-764922 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (37.117858474s)
--- PASS: TestNoKubernetes/serial/Start (37.12s)

                                                
                                    
TestPause/serial/Pause (0.77s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-104532 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.77s)

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-104532 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-104532 --output=json --layout=cluster: exit status 2 (256.702971ms)

                                                
                                                
-- stdout --
	{"Name":"pause-104532","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-104532","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

                                                
                                    
TestPause/serial/Unpause (0.79s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-104532 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

                                                
                                    
TestPause/serial/PauseAgain (0.99s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-104532 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

                                                
                                    
TestPause/serial/DeletePaused (0.84s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-104532 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.84s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (4.81s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.813360919s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.81s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-764922 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-764922 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.135532ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.85s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.075063728s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.85s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.78s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-764922
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-764922: (1.780746551s)
--- PASS: TestNoKubernetes/serial/Stop (1.78s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (47.51s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-764922 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-764922 --driver=kvm2  --container-runtime=containerd: (47.513752476s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.51s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-764922 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-764922 "sudo systemctl is-active --quiet service kubelet": exit status 1 (233.590469ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)
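
Both VerifyK8sNotRunning checks use the same probe over SSH; a brief sketch of what is being asserted, with the profile name from this run:

	# exits 0 only when the kubelet unit is active; here minikube ssh exits non-zero
	# and reports the remote exit status (3, i.e. the unit is not running) on stderr
	out/minikube-linux-amd64 ssh -p NoKubernetes-764922 "sudo systemctl is-active --quiet service kubelet"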

                                                
                                    
TestNetworkPlugins/group/false (3.47s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-572360 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-572360 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (174.953731ms)

                                                
                                                
-- stdout --
	* [false-572360] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18779
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0501 03:14:59.806964   57825 out.go:291] Setting OutFile to fd 1 ...
	I0501 03:14:59.807160   57825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:14:59.807172   57825 out.go:304] Setting ErrFile to fd 2...
	I0501 03:14:59.807179   57825 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0501 03:14:59.807608   57825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18779-13407/.minikube/bin
	I0501 03:14:59.808425   57825 out.go:298] Setting JSON to false
	I0501 03:14:59.809359   57825 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":7042,"bootTime":1714526258,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0501 03:14:59.809421   57825 start.go:139] virtualization: kvm guest
	I0501 03:14:59.811860   57825 out.go:177] * [false-572360] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0501 03:14:59.813528   57825 notify.go:220] Checking for updates...
	I0501 03:14:59.813533   57825 out.go:177]   - MINIKUBE_LOCATION=18779
	I0501 03:14:59.815115   57825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0501 03:14:59.816642   57825 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18779-13407/kubeconfig
	I0501 03:14:59.818094   57825 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18779-13407/.minikube
	I0501 03:14:59.819645   57825 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0501 03:14:59.821104   57825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0501 03:14:59.823147   57825 config.go:182] Loaded profile config "NoKubernetes-764922": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0501 03:14:59.823281   57825 config.go:182] Loaded profile config "kubernetes-upgrade-088753": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.30.0
	I0501 03:14:59.823401   57825 driver.go:392] Setting default libvirt URI to qemu:///system
	I0501 03:14:59.909055   57825 out.go:177] * Using the kvm2 driver based on user configuration
	I0501 03:14:59.910443   57825 start.go:297] selected driver: kvm2
	I0501 03:14:59.910468   57825 start.go:901] validating driver "kvm2" against <nil>
	I0501 03:14:59.910482   57825 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0501 03:14:59.912774   57825 out.go:177] 
	W0501 03:14:59.914131   57825 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0501 03:14:59.915316   57825 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-572360 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-572360

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /etc/hosts:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /etc/resolv.conf:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-572360

>>> host: crictl pods:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: crictl containers:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> k8s: describe netcat deployment:
error: context "false-572360" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-572360" does not exist

>>> k8s: netcat logs:
error: context "false-572360" does not exist

>>> k8s: describe coredns deployment:
error: context "false-572360" does not exist

>>> k8s: describe coredns pods:
error: context "false-572360" does not exist

>>> k8s: coredns logs:
error: context "false-572360" does not exist

>>> k8s: describe api server pod(s):
error: context "false-572360" does not exist

>>> k8s: api server logs:
error: context "false-572360" does not exist

>>> host: /etc/cni:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: ip a s:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: ip r s:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: iptables-save:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: iptables table nat:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> k8s: describe kube-proxy daemon set:
error: context "false-572360" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-572360" does not exist

>>> k8s: kube-proxy logs:
error: context "false-572360" does not exist

>>> host: kubelet daemon status:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: kubelet daemon config:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> k8s: kubelet logs:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-572360

>>> host: docker daemon status:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: docker daemon config:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /etc/docker/daemon.json:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: docker system info:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: cri-docker daemon status:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: cri-docker daemon config:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: cri-dockerd version:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: containerd daemon status:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: containerd daemon config:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /etc/containerd/config.toml:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: containerd config dump:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: crio daemon status:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: crio daemon config:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: /etc/crio:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

>>> host: crio config:
* Profile "false-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-572360"

----------------------- debugLogs end: false-572360 [took: 3.128161071s] --------------------------------
helpers_test.go:175: Cleaning up "false-572360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-572360
--- PASS: TestNetworkPlugins/group/false (3.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (172.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.933192916 start -p stopped-upgrade-059542 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.933192916 start -p stopped-upgrade-059542 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m8.238317016s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.933192916 -p stopped-upgrade-059542 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.933192916 -p stopped-upgrade-059542 stop: (2.311809409s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-059542 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-059542 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m41.691379917s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (172.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (205.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-723093 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0501 03:17:38.814031   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-723093 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (3m25.057136472s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (205.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (117.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-601721 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-601721 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m57.586478209s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (117.59s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-059542
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-059542: (1.602462815s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (112.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-642508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-642508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m52.040672226s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (112.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.77s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-263973 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-263973 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m0.769788997s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-642508 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [751e64f1-f70f-4d88-a2ad-2c811a867614] Pending
helpers_test.go:344: "busybox" [751e64f1-f70f-4d88-a2ad-2c811a867614] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [751e64f1-f70f-4d88-a2ad-2c811a867614] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005693114s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-642508 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-601721 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f54a57bb-b808-413c-85f9-1713f26bcbbc] Pending
helpers_test.go:344: "busybox" [f54a57bb-b808-413c-85f9-1713f26bcbbc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f54a57bb-b808-413c-85f9-1713f26bcbbc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005131578s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-601721 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-642508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-642508 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.019497986s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-642508 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (92.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-642508 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-642508 --alsologtostderr -v=3: (1m32.497691145s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-601721 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-601721 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (92.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-601721 --alsologtostderr -v=3
E0501 03:20:16.315255   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-601721 --alsologtostderr -v=3: (1m32.490560258s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-723093 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9199f28d-6413-4da8-9ba3-d11ae22a0860] Pending
helpers_test.go:344: "busybox" [9199f28d-6413-4da8-9ba3-d11ae22a0860] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9199f28d-6413-4da8-9ba3-d11ae22a0860] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004250714s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-723093 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-723093 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-723093 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (92.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-723093 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-723093 --alsologtostderr -v=3: (1m32.473343694s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-263973 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [545c8686-0020-4d84-963c-324a6539b292] Pending
helpers_test.go:344: "busybox" [545c8686-0020-4d84-963c-324a6539b292] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [545c8686-0020-4d84-963c-324a6539b292] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003932507s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-263973 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-263973 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-263973 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (92.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-263973 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-263973 --alsologtostderr -v=3: (1m32.478166944s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-642508 -n embed-certs-642508
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-642508 -n embed-certs-642508: exit status 7 (74.455181ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-642508 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (295.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-642508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-642508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (4m55.648881333s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-642508 -n embed-certs-642508
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (295.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-601721 -n no-preload-601721
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-601721 -n no-preload-601721: exit status 7 (78.280151ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-601721 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (332.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-601721 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-601721 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (5m31.742178178s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-601721 -n no-preload-601721
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (332.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-723093 -n old-k8s-version-723093
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-723093 -n old-k8s-version-723093: exit status 7 (75.516116ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-723093 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (485.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-723093 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-723093 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (8m5.24879284s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-723093 -n old-k8s-version-723093
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (485.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-263973 -n default-k8s-diff-port-263973
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-263973 -n default-k8s-diff-port-263973: exit status 7 (101.227548ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-263973 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (333.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-263973 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
E0501 03:22:38.813321   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 03:24:01.863110   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
E0501 03:25:16.315112   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-263973 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (5m33.320833881s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-263973 -n default-k8s-diff-port-263973
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (333.64s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-grxnl" [c6d107bd-14f5-4291-9be5-431185fa6550] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006098971s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-grxnl" [c6d107bd-14f5-4291-9be5-431185fa6550] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005385763s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-642508 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-642508 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-642508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-642508 -n embed-certs-642508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-642508 -n embed-certs-642508: exit status 2 (281.594696ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-642508 -n embed-certs-642508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-642508 -n embed-certs-642508: exit status 2 (268.712255ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-642508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-642508 -n embed-certs-642508
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-642508 -n embed-certs-642508
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (61.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-256078 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-256078 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (1m1.600685889s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.60s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-42tb7" [d80ded6e-e5df-4dd4-a198-ef9258a3ec72] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-42tb7" [d80ded6e-e5df-4dd4-a198-ef9258a3ec72] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.011611309s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-42tb7" [d80ded6e-e5df-4dd4-a198-ef9258a3ec72] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005315449s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-601721 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-601721 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-601721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-601721 -n no-preload-601721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-601721 -n no-preload-601721: exit status 2 (288.649782ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-601721 -n no-preload-601721
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-601721 -n no-preload-601721: exit status 2 (276.173462ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-601721 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-601721 -n no-preload-601721
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-601721 -n no-preload-601721
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (101.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0501 03:27:38.813871   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/addons-753721/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m41.187593194s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-256078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-256078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.284582819s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-256078 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-256078 --alsologtostderr -v=3: (7.37150735s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-256078 -n newest-cni-256078
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-256078 -n newest-cni-256078: exit status 7 (94.410044ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-256078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
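For reference, the addon-enable steps above can be replayed by hand; a minimal sketch using the same commands and profile name as this run (the --images/--registries overrides are test-specific stand-ins, not values normally needed):

    # enable metrics-server with an overridden image and registry (run while the cluster is up)
    out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-256078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # enable the dashboard addon while the profile is stopped (status shows Stopped just above)
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-256078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4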

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (41.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-256078 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-256078 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.30.0: (41.620299219s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-256078 -n newest-cni-256078
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (41.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wwkd4" [20c28aac-d702-4c64-b846-ece04cbe8382] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wwkd4" [20c28aac-d702-4c64-b846-ece04cbe8382] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.005644425s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wwkd4" [20c28aac-d702-4c64-b846-ece04cbe8382] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005458801s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-263973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
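The two dashboard checks above wait on pods carrying the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace. A rough manual equivalent, assuming kubectl is pointed at the same context (the --watch flag is added here for convenience and is not part of the test):

    kubectl --context default-k8s-diff-port-263973 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard --watch
    kubectl --context default-k8s-diff-port-263973 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard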

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-263973 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)
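The image check simply lists everything loaded on the node and reports images outside the usual minikube/Kubernetes set. A sketch for inspecting the same output by hand; the jq filter and the repoTags field name are assumptions about the JSON shape, not something this report verifies:

    out/minikube-linux-amd64 -p default-k8s-diff-port-263973 image list --format=json | jq -r '.[].repoTags[]?'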

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-263973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-263973 --alsologtostderr -v=1: (1.004479195s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-263973 -n default-k8s-diff-port-263973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-263973 -n default-k8s-diff-port-263973: exit status 2 (265.803406ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-263973 -n default-k8s-diff-port-263973
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-263973 -n default-k8s-diff-port-263973: exit status 2 (254.75788ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-263973 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-263973 -n default-k8s-diff-port-263973
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-263973 -n default-k8s-diff-port-263973
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)
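The exit status 2 results above are expected while the profile is paused: status reports the apiserver as Paused and the kubelet as Stopped until unpause. A minimal sketch of the same sequence with this run's profile name (node flag omitted, so status defaults to the primary node):

    out/minikube-linux-amd64 pause -p default-k8s-diff-port-263973 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-263973   # prints "Paused", exit status 2 (may be ok)
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-263973     # prints "Stopped", exit status 2 (may be ok)
    out/minikube-linux-amd64 unpause -p default-k8s-diff-port-263973 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-263973   # succeeds again after unpause in this run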

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (69.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m9.624986557s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.63s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-256078 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-256078 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-256078 -n newest-cni-256078
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-256078 -n newest-cni-256078: exit status 2 (292.034005ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-256078 -n newest-cni-256078
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-256078 -n newest-cni-256078: exit status 2 (282.934272ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-256078 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-256078 -n newest-cni-256078
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-256078 -n newest-cni-256078
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (119.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m59.793389194s)
--- PASS: TestNetworkPlugins/group/calico/Start (119.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-572360 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-572360 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jssnc" [0eded827-687c-44ef-b325-ecd0f9af9c30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jssnc" [0eded827-687c-44ef-b325-ecd0f9af9c30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005221054s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-572360 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
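Each network-plugin group in this report repeats the same probes against a throwaway netcat deployment: deploy it, resolve cluster DNS from inside the pod, reach localhost, then reach the pod back through its own service (the hairpin check). A rough manual equivalent for the auto profile, using the exact commands the suite runs (the testdata/netcat-deployment.yaml path is relative to the minikube test tree):

    kubectl --context auto-572360 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-572360 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"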

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-792m6" [5718ccef-d6b3-4e50-aaec-691593fcd158] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007197672s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-572360 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-572360 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kqbwl" [d60dbe0e-0e21-474d-8222-2e438ab8b8d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kqbwl" [d60dbe0e-0e21-474d-8222-2e438ab8b8d1] Running
E0501 03:29:51.923056   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
E0501 03:29:51.928372   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
E0501 03:29:51.938606   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
E0501 03:29:51.958890   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
E0501 03:29:51.999156   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
E0501 03:29:52.079429   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
E0501 03:29:52.240566   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
E0501 03:29:52.561073   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
E0501 03:29:53.201971   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005043133s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (85.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m25.576000087s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-572360 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0501 03:29:54.482907   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (104.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0501 03:30:12.404347   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m44.908898986s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (104.91s)
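The Start steps in this group differ only in how the CNI is selected. A condensed sketch of the variants exercised so far in this report, using the profile names from this run (the full invocations above also pass --alsologtostderr --wait=true --wait-timeout=15m):

    out/minikube-linux-amd64 start -p kindnet-572360 --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p calico-572360 --memory=3072 --cni=calico --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p custom-flannel-572360 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p enable-default-cni-572360 --memory=3072 --enable-default-cni=true --driver=kvm2 --container-runtime=containerd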

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9ht45" [cc7abe69-6bc6-4e12-8900-7679b16c3b51] Running
E0501 03:30:16.315515   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/functional-167406/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.301432753s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9ht45" [cc7abe69-6bc6-4e12-8900-7679b16c3b51] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005432873s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-723093 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-723093 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-723093 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-723093 -n old-k8s-version-723093
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-723093 -n old-k8s-version-723093: exit status 2 (354.703351ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-723093 -n old-k8s-version-723093
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-723093 -n old-k8s-version-723093: exit status 2 (365.166701ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-723093 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-723093 -n old-k8s-version-723093
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-723093 -n old-k8s-version-723093
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (102.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0501 03:30:32.884990   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m42.68598147s)
--- PASS: TestNetworkPlugins/group/flannel/Start (102.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rrjdr" [164bfa55-ba42-44d3-8ebc-2479e0111a5f] Running
E0501 03:30:42.529317   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:42.534619   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:42.544881   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:42.565338   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:42.606074   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:42.686394   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:42.846788   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:43.167881   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:43.809036   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:45.091290   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
E0501 03:30:47.651877   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008660637s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-572360 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-572360 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-srsjh" [9dc62d6e-d631-4152-8d75-e17b4e86d8d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0501 03:30:52.773134   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-srsjh" [9dc62d6e-d631-4152-8d75-e17b4e86d8d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005658835s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-572360 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-572360 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-572360 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-572360 replace --force -f testdata/netcat-deployment.yaml: (1.635958429s)
E0501 03:31:13.846208   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/no-preload-601721/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-59q76" [90d02fce-ed87-4d4c-ba8f-71da9a531e39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-59q76" [90d02fce-ed87-4d4c-ba8f-71da9a531e39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005014849s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (103.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-572360 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m43.122364137s)
--- PASS: TestNetworkPlugins/group/bridge/Start (103.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-572360 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-572360 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-572360 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-v5jjf" [fe244d46-ca54-4881-9302-ca1ef88de201] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-v5jjf" [fe244d46-ca54-4881-9302-ca1ef88de201] Running
E0501 03:32:04.460341   20785 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/default-k8s-diff-port-263973/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00506026s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-572360 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6gm7p" [7e5390f7-f1d4-4f0c-bae6-405689f2f330] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004619913s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-572360 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-572360 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-w8zfz" [780b8c9c-3b6e-44c1-bc96-8842be7480df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-w8zfz" [780b8c9c-3b6e-44c1-bc96-8842be7480df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00461983s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-572360 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-572360 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-572360 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tc6fr" [2d6c2a39-4a49-452f-b778-e6564aeea194] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tc6fr" [2d6c2a39-4a49-452f-b778-e6564aeea194] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005613166s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-572360 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-572360 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (36/325)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.02
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
256 TestStartStop/group/disable-driver-mounts 0.16
276 TestNetworkPlugins/group/kubenet 3.56
287 TestNetworkPlugins/group/cilium 3.61
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.02s)
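
The TunnelCmd skips in this serial group all come from a single precondition: the tunnel test needs to run 'route' through sudo without being prompted for a password. Below is a minimal sketch of how such a probe can be written; it is illustrative only and is not the actual check in functional_test_tunnel_test.go.

// tunnel_probe_sketch_test.go - illustrative only.
package example

import (
	"os/exec"
	"testing"
)

// canSudoRouteWithoutPassword probes whether "route" can be run through sudo
// non-interactively: with -n, sudo fails immediately instead of prompting
// when a password would be required.
func canSudoRouteWithoutPassword() bool {
	return exec.Command("sudo", "-n", "route").Run() == nil
}

// TestTunnelSketch shows how that probe turns into the SKIP lines seen in
// this group when the probe fails.
func TestTunnelSketch(t *testing.T) {
	if !canSudoRouteWithoutPassword() {
		t.Skip("password required to execute 'route', skipping testTunnel")
	}
	// tunnel assertions would run here
}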

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-587621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-587621
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
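
The two helpers_test.go lines above show the usual cleanup pattern in this suite: even when a group is skipped, the profile it created is deleted with "minikube delete -p". A sketch of how such a cleanup can be registered via t.Cleanup follows; it is illustrative only, and cleanupProfile is a made-up name rather than the helper in helpers_test.go. The binary path is the one used throughout this report.

// cleanup_sketch_test.go - illustrative only.
package example

import (
	"os/exec"
	"testing"
)

// cleanupProfile registers a deferred "minikube delete -p <profile>" so the
// profile is removed even when the test skips or fails, mirroring the
// helpers_test.go cleanup lines in the log above.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Cleanup(func() {
		out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("cleanup of %q failed: %v\n%s", profile, err, out)
		}
	})
}

// TestDisableDriverMountsSketch shows the helper in use: cleanup is
// registered first, then the test skips on unsupported drivers.
func TestDisableDriverMountsSketch(t *testing.T) {
	cleanupProfile(t, "disable-driver-mounts-587621")
	t.Skip("skipping - only runs on virtualbox")
}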

                                                
                                    
TestNetworkPlugins/group/kubenet (3.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-572360 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-572360" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18779-13407/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 01 May 2024 03:14:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.72.145:8443
  name: running-upgrade-788212
contexts:
- context:
    cluster: running-upgrade-788212
    extensions:
    - extension:
        last-update: Wed, 01 May 2024 03:14:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: running-upgrade-788212
  name: running-upgrade-788212
current-context: running-upgrade-788212
kind: Config
preferences: {}
users:
- name: running-upgrade-788212
  user:
    client-certificate: /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/running-upgrade-788212/client.crt
    client-key: /home/jenkins/minikube-integration/18779-13407/.minikube/profiles/running-upgrade-788212/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-572360

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-572360"

                                                
                                                
----------------------- debugLogs end: kubenet-572360 [took: 3.419633369s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-572360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-572360
--- SKIP: TestNetworkPlugins/group/kubenet (3.56s)
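
Every kubectl call in the kubenet debug dump above fails with "context was not found" because the kubeconfig captured at that moment (see the "k8s: kubectl config" section) only contains the running-upgrade-788212 context, not kubenet-572360; the profile was never started before debugLogs ran. A small sketch of how to check for that condition with client-go follows; it is illustrative only and assumes k8s.io/client-go is available in the module.

// kubeconfig_context_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Respect KUBECONFIG if set, otherwise fall back to ~/.kube/config.
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = clientcmd.RecommendedHomeFile
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to load kubeconfig:", err)
		os.Exit(1)
	}
	want := "kubenet-572360"
	if _, ok := cfg.Contexts[want]; !ok {
		// This is exactly the situation the debug dump shows: the requested
		// context is absent, so every kubectl invocation errors out.
		fmt.Printf("context %q not found; %d context(s) present, current-context is %q\n",
			want, len(cfg.Contexts), cfg.CurrentContext)
	}
}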

                                                
                                    
TestNetworkPlugins/group/cilium (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-572360 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-572360" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-572360

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-572360" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-572360"

                                                
                                                
----------------------- debugLogs end: cilium-572360 [took: 3.458715274s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-572360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-572360
--- SKIP: TestNetworkPlugins/group/cilium (3.61s)

                                                
                                    