Test Report: KVM_Linux_containerd 17586

                    
d1a75fe08206deb6fc1cd915add724f43e3a5600:2023-11-09:31801

Tests failed (13/306)

TestErrorSpam/setup (62.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-764351 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-764351 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-764351 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-764351 --driver=kvm2  --container-runtime=containerd: (1m2.630464752s)
error_spam_test.go:96: unexpected stderr: "X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17586-201782/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3: no such file or directory"
error_spam_test.go:110: minikube stdout:
* [nospam-764351] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17586
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-764351 in cluster nospam-764351
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.3 on containerd 1.7.8 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-764351" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17586-201782/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3: no such file or directory
--- FAIL: TestErrorSpam/setup (62.63s)
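Note: the start itself succeeded here; the test fails because error_spam_test treats any unexpected line on stderr, such as the "X Unable to load cached images" warning above, as error spam. The Go snippet below is a minimal illustrative sketch of that kind of allow-list scan over stderr; the function name and allowed patterns are hypothetical and this is not minikube's actual error_spam_test.go implementation.

package main

import (
	"fmt"
	"strings"
)

// allowedStderr lists substrings that are tolerated on stderr during "minikube start".
// These patterns are hypothetical examples, not minikube's real allow-list.
var allowedStderr = []string{
	"! Executing \"docker container inspect",
}

// unexpectedStderrLines returns every non-empty stderr line that matches none of the
// allowed patterns; a non-empty result is what turns an otherwise passing start into
// a test failure like the one reported above.
func unexpectedStderrLines(stderr string) []string {
	var bad []string
	for _, line := range strings.Split(stderr, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		allowed := false
		for _, allow := range allowedStderr {
			if strings.Contains(line, allow) {
				allowed = true
				break
			}
		}
		if !allowed {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := `X Unable to load cached images: loading cached images: stat .../kube-controller-manager_v1.28.3: no such file or directory`
	for _, line := range unexpectedStderrLines(stderr) {
		fmt.Printf("unexpected stderr: %q\n", line)
	}
}

Any line left over after this kind of filtering is reported exactly like the "unexpected stderr" entry in the log above.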

TestFunctional/serial/ExtraConfig (71.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-400359 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1108 23:44:26.357822  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
functional_test.go:753: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-400359 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (41.815253098s)

-- stdout --
	* [functional-400359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node functional-400359 in cluster functional-400359
	* Updating the running kvm2 "functional-400359" VM ...
	* Preparing Kubernetes v1.28.3 on containerd 1.7.8 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	

-- /stdout --
** stderr ** 
	E1108 23:44:41.201176  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:41.201841  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-wv6f7" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wv6f7": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:41.202362  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:41.216977  213888 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.39.189:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:41.341045  213888 start.go:891] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-400359": Get "https://192.168.39.189:8441/api/v1/nodes/functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-400359 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 41.815503197s for "functional-400359" cluster.
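Note: every "connect: connection refused" in the stderr above is the restarted kube-apiserver at 192.168.39.189:8441 not accepting connections while --wait=all polls node and pod readiness, which is what ultimately surfaces as the GUEST_START error and exit status 80. The snippet below is a rough, illustrative sketch of that kind of readiness poll, assuming a plain TCP dial with retries; it is not minikube's wait implementation and the timeout values are arbitrary.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials the apiserver address until it accepts a TCP connection or the
// deadline passes. It mirrors, very loosely, what any readiness wait has to succeed at
// before node/pod condition polling can make progress.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // apiserver port is accepting connections
		}
		// "connection refused" lands here until the apiserver finishes (re)starting.
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForAPIServer("192.168.39.189:8441", 30*time.Second); err != nil {
		fmt.Println("wait failed:", err)
	}
}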
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-400359 -n functional-400359
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-400359 -n functional-400359: exit status 2 (13.745374363s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 logs -n 25: (1.549516599s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-764351                                                         | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:43 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | minikube-local-cache-test:functional-400359                              |                   |         |         |                     |                     |
	| cache   | functional-400359 cache delete                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | minikube-local-cache-test:functional-400359                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	| ssh     | functional-400359 ssh sudo                                               | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-400359                                                        | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-400359 ssh                                                    | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-400359 cache reload                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	| ssh     | functional-400359 ssh                                                    | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-400359 kubectl --                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | --context functional-400359                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:43:59
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:43:59.599157  213888 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:43:59.599412  213888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:43:59.599416  213888 out.go:309] Setting ErrFile to fd 2...
	I1108 23:43:59.599420  213888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:43:59.599606  213888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	I1108 23:43:59.600217  213888 out.go:303] Setting JSON to false
	I1108 23:43:59.601119  213888 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":23194,"bootTime":1699463846,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 23:43:59.601189  213888 start.go:138] virtualization: kvm guest
	I1108 23:43:59.603447  213888 out.go:177] * [functional-400359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 23:43:59.605356  213888 notify.go:220] Checking for updates...
	I1108 23:43:59.605376  213888 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:43:59.607074  213888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:43:59.608704  213888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:43:59.610319  213888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	I1108 23:43:59.611947  213888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 23:43:59.613523  213888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:43:59.615400  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:43:59.615477  213888 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:43:59.615864  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:43:59.615909  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:43:59.631683  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I1108 23:43:59.632150  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:43:59.632691  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:43:59.632708  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:43:59.633075  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:43:59.633250  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.666922  213888 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 23:43:59.668639  213888 start.go:298] selected driver: kvm2
	I1108 23:43:59.668648  213888 start.go:902] validating driver "kvm2" against &{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400
359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:43:59.668789  213888 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:43:59.669167  213888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:43:59.669241  213888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17586-201782/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 23:43:59.685241  213888 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 23:43:59.685958  213888 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 23:43:59.686030  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:43:59.686038  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:43:59.686047  213888 start_flags.go:323] config:
	{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:43:59.686238  213888 iso.go:125] acquiring lock: {Name:mk33479b76ec6919fe69628bcf9e99f9786f49af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:43:59.688123  213888 out.go:177] * Starting control plane node functional-400359 in cluster functional-400359
	I1108 23:43:59.689492  213888 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:43:59.689531  213888 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4
	I1108 23:43:59.689548  213888 cache.go:56] Caching tarball of preloaded images
	I1108 23:43:59.689653  213888 preload.go:174] Found /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1108 23:43:59.689661  213888 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1108 23:43:59.689851  213888 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/config.json ...
	I1108 23:43:59.690069  213888 start.go:365] acquiring machines lock for functional-400359: {Name:mkc58a906fd9c58de0776efcd0f08335945567ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 23:43:59.690115  213888 start.go:369] acquired machines lock for "functional-400359" in 32.532µs
	I1108 23:43:59.690130  213888 start.go:96] Skipping create...Using existing machine configuration
	I1108 23:43:59.690134  213888 fix.go:54] fixHost starting: 
	I1108 23:43:59.690432  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:43:59.690465  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:43:59.706016  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I1108 23:43:59.706457  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:43:59.706983  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:43:59.707003  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:43:59.707316  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:43:59.707534  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.707715  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:43:59.709629  213888 fix.go:102] recreateIfNeeded on functional-400359: state=Running err=<nil>
	W1108 23:43:59.709665  213888 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 23:43:59.711868  213888 out.go:177] * Updating the running kvm2 "functional-400359" VM ...
	I1108 23:43:59.713307  213888 machine.go:88] provisioning docker machine ...
	I1108 23:43:59.713332  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.713637  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.713880  213888 buildroot.go:166] provisioning hostname "functional-400359"
	I1108 23:43:59.713899  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.714053  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:43:59.716647  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.717013  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:43:59.717073  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.717195  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:43:59.717406  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.717589  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.717824  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:43:59.718013  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:43:59.718360  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:43:59.718370  213888 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-400359 && echo "functional-400359" | sudo tee /etc/hostname
	I1108 23:43:59.863990  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-400359
	
	I1108 23:43:59.864012  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:43:59.866908  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.867252  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:43:59.867363  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.867442  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:43:59.867690  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.867850  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.867996  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:43:59.868145  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:43:59.868492  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:43:59.868503  213888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-400359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-400359/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-400359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 23:43:59.999382  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 23:43:59.999410  213888 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17586-201782/.minikube CaCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17586-201782/.minikube}
	I1108 23:43:59.999434  213888 buildroot.go:174] setting up certificates
	I1108 23:43:59.999445  213888 provision.go:83] configureAuth start
	I1108 23:43:59.999455  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.999781  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:00.002662  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.002978  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.003014  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.003248  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.005651  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.006085  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.006106  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.006287  213888 provision.go:138] copyHostCerts
	I1108 23:44:00.006374  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem, removing ...
	I1108 23:44:00.006389  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem
	I1108 23:44:00.006451  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem (1078 bytes)
	I1108 23:44:00.006581  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem, removing ...
	I1108 23:44:00.006587  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem
	I1108 23:44:00.006617  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem (1123 bytes)
	I1108 23:44:00.006719  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem, removing ...
	I1108 23:44:00.006724  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem
	I1108 23:44:00.006742  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem (1679 bytes)
	I1108 23:44:00.006784  213888 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem org=jenkins.functional-400359 san=[192.168.39.189 192.168.39.189 localhost 127.0.0.1 minikube functional-400359]
	I1108 23:44:00.203873  213888 provision.go:172] copyRemoteCerts
	I1108 23:44:00.203931  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 23:44:00.203956  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.206797  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.207094  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.207119  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.207305  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.207516  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.207692  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.207814  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.301445  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 23:44:00.331684  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 23:44:00.361187  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 23:44:00.388214  213888 provision.go:86] duration metric: configureAuth took 388.751766ms
	I1108 23:44:00.388241  213888 buildroot.go:189] setting minikube options for container-runtime
	I1108 23:44:00.388477  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:44:00.388484  213888 machine.go:91] provisioned docker machine in 675.168638ms
	I1108 23:44:00.388492  213888 start.go:300] post-start starting for "functional-400359" (driver="kvm2")
	I1108 23:44:00.388500  213888 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 23:44:00.388535  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.388924  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 23:44:00.388948  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.391561  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.391940  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.391967  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.392105  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.392316  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.392453  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.392611  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.488199  213888 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 23:44:00.492976  213888 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 23:44:00.492992  213888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-201782/.minikube/addons for local assets ...
	I1108 23:44:00.493051  213888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-201782/.minikube/files for local assets ...
	I1108 23:44:00.493113  213888 filesync.go:149] local asset: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem -> 2089632.pem in /etc/ssl/certs
	I1108 23:44:00.493174  213888 filesync.go:149] local asset: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/test/nested/copy/208963/hosts -> hosts in /etc/test/nested/copy/208963
	I1108 23:44:00.493206  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/208963
	I1108 23:44:00.501656  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem --> /etc/ssl/certs/2089632.pem (1708 bytes)
	I1108 23:44:00.525422  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/test/nested/copy/208963/hosts --> /etc/test/nested/copy/208963/hosts (40 bytes)
	I1108 23:44:00.548996  213888 start.go:303] post-start completed in 160.490436ms
	I1108 23:44:00.549028  213888 fix.go:56] fixHost completed within 858.891713ms
	I1108 23:44:00.549103  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.551962  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.552311  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.552329  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.552563  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.552735  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.552911  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.553036  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.553160  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:44:00.553504  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:44:00.553510  213888 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1108 23:44:00.679007  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699487040.675193612
	
	I1108 23:44:00.679025  213888 fix.go:206] guest clock: 1699487040.675193612
	I1108 23:44:00.679031  213888 fix.go:219] Guest: 2023-11-08 23:44:00.675193612 +0000 UTC Remote: 2023-11-08 23:44:00.549031363 +0000 UTC m=+1.003889169 (delta=126.162249ms)
	I1108 23:44:00.679051  213888 fix.go:190] guest clock delta is within tolerance: 126.162249ms
	I1108 23:44:00.679055  213888 start.go:83] releasing machines lock for "functional-400359", held for 988.934098ms
	I1108 23:44:00.679080  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.679402  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:00.682635  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.683021  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.683048  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.683271  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.683917  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.684098  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.684213  213888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 23:44:00.684252  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.684416  213888 ssh_runner.go:195] Run: cat /version.json
	I1108 23:44:00.684440  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.687054  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687399  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.687426  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687449  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687587  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.687788  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.687907  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.687935  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687948  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.688119  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.688118  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.688285  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.688448  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.688589  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.802586  213888 ssh_runner.go:195] Run: systemctl --version
	I1108 23:44:00.808787  213888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 23:44:00.814779  213888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 23:44:00.814850  213888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 23:44:00.824904  213888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 23:44:00.824923  213888 start.go:472] detecting cgroup driver to use...
	I1108 23:44:00.824994  213888 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1108 23:44:00.839653  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1108 23:44:00.852631  213888 docker.go:203] disabling cri-docker service (if available) ...
	I1108 23:44:00.852687  213888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 23:44:00.865664  213888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 23:44:00.878442  213888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 23:44:01.013896  213888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 23:44:01.176298  213888 docker.go:219] disabling docker service ...
	I1108 23:44:01.176368  213888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 23:44:01.191617  213888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 23:44:01.205423  213888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 23:44:01.352320  213888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 23:44:01.505796  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 23:44:01.520373  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 23:44:01.539920  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1108 23:44:01.552198  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1108 23:44:01.564553  213888 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1108 23:44:01.564634  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1108 23:44:01.577530  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:44:01.589460  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1108 23:44:01.601621  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:44:01.615054  213888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 23:44:01.626891  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1108 23:44:01.638637  213888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 23:44:01.649235  213888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 23:44:01.660480  213888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 23:44:01.793850  213888 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1108 23:44:01.824923  213888 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1108 23:44:01.824991  213888 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1108 23:44:01.831130  213888 retry.go:31] will retry after 821.206397ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1108 23:44:02.653187  213888 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1108 23:44:02.660143  213888 start.go:540] Will wait 60s for crictl version
	I1108 23:44:02.660193  213888 ssh_runner.go:195] Run: which crictl
	I1108 23:44:02.665280  213888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 23:44:02.711632  213888 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.8
	RuntimeApiVersion:  v1
	I1108 23:44:02.711708  213888 ssh_runner.go:195] Run: containerd --version
	I1108 23:44:02.742401  213888 ssh_runner.go:195] Run: containerd --version
	I1108 23:44:02.772662  213888 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.7.8 ...
	I1108 23:44:02.774143  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:02.776902  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:02.777294  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:02.777321  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:02.777524  213888 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 23:44:02.784598  213888 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1108 23:44:02.786474  213888 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:44:02.786612  213888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:44:02.834765  213888 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:44:02.834781  213888 containerd.go:518] Images already preloaded, skipping extraction
	I1108 23:44:02.834839  213888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:44:02.877779  213888 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:44:02.877797  213888 cache_images.go:84] Images are preloaded, skipping loading
	I1108 23:44:02.877870  213888 ssh_runner.go:195] Run: sudo crictl info
	I1108 23:44:02.924597  213888 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1108 23:44:02.924626  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:44:02.924635  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:44:02.924644  213888 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 23:44:02.924661  213888 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8441 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-400359 NodeName:functional-400359 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubele
tConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 23:44:02.924813  213888 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-400359"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 23:44:02.924893  213888 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-400359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
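The [Service] override above replaces kubelet's ExecStart with the minikube-specific flags; a drop-in like this only takes effect after systemd reloads its unit files. A hedged sketch of verifying it on the node (these exact commands are an assumption, not taken from this log):

  sudo systemctl daemon-reload
  systemctl cat kubelet        # merged unit: the empty ExecStart= plus the override above
  sudo systemctl restart kubelet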
	I1108 23:44:02.924953  213888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 23:44:02.936489  213888 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 23:44:02.936562  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 23:44:02.947183  213888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I1108 23:44:02.966007  213888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 23:44:02.985587  213888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1962 bytes)
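With the rendered config staged as /var/tmp/minikube/kubeadm.yaml.new, it could also be validated before use; a sketch assuming kubeadm's "config validate" subcommand (available in recent kubeadm releases, but not invoked by this run):

  sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new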
	I1108 23:44:03.005107  213888 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I1108 23:44:03.010099  213888 certs.go:56] Setting up /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359 for IP: 192.168.39.189
	I1108 23:44:03.010128  213888 certs.go:190] acquiring lock for shared ca certs: {Name:mk39cbc6402159d6a738802f6361f72eac5d34d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:03.010382  213888 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17586-201782/.minikube/ca.key
	I1108 23:44:03.010425  213888 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.key
	I1108 23:44:03.010497  213888 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.key
	I1108 23:44:03.010540  213888 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.key.3964182b
	I1108 23:44:03.010588  213888 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.key
	I1108 23:44:03.010739  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963.pem (1338 bytes)
	W1108 23:44:03.010780  213888 certs.go:433] ignoring /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963_empty.pem, impossibly tiny 0 bytes
	I1108 23:44:03.010790  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 23:44:03.010822  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem (1078 bytes)
	I1108 23:44:03.010853  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem (1123 bytes)
	I1108 23:44:03.010885  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem (1679 bytes)
	I1108 23:44:03.010944  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem (1708 bytes)
	I1108 23:44:03.011800  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 23:44:03.052476  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 23:44:03.084167  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 23:44:03.113455  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 23:44:03.138855  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 23:44:03.170000  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 23:44:03.203207  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 23:44:03.233030  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 23:44:03.262431  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963.pem --> /usr/share/ca-certificates/208963.pem (1338 bytes)
	I1108 23:44:03.288670  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem --> /usr/share/ca-certificates/2089632.pem (1708 bytes)
	I1108 23:44:03.317344  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 23:44:03.345150  213888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 23:44:03.367221  213888 ssh_runner.go:195] Run: openssl version
	I1108 23:44:03.373631  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2089632.pem && ln -fs /usr/share/ca-certificates/2089632.pem /etc/ssl/certs/2089632.pem"
	I1108 23:44:03.388662  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.394338  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  8 23:42 /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.394401  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.400580  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2089632.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 23:44:03.412248  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 23:44:03.425515  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.430926  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  8 23:35 /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.430990  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.437443  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 23:44:03.447837  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208963.pem && ln -fs /usr/share/ca-certificates/208963.pem /etc/ssl/certs/208963.pem"
	I1108 23:44:03.461453  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.467398  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  8 23:42 /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.467478  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.474228  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/208963.pem /etc/ssl/certs/51391683.0"
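The three blocks above repeat one pattern per certificate: hash it with openssl, then link it into /etc/ssl/certs under its subject-hash name so OpenSSL-based clients can find it. Condensed into a loop (the loop form is an illustration; the filenames are the ones from the log):

  for pem in 2089632.pem minikubeCA.pem 208963.pem; do
    hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
    sudo ln -fs "/usr/share/ca-certificates/$pem" "/etc/ssl/certs/${hash}.0"
  done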
	I1108 23:44:03.487446  213888 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 23:44:03.492652  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 23:44:03.499552  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 23:44:03.507193  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 23:44:03.514236  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 23:44:03.521522  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 23:44:03.527708  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
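Each -checkend 86400 call above exits non-zero if the certificate would expire within the next 24 hours. The same check with an explicit result, as a sketch (the loop and messages are assumptions):

  for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt etcd/server.crt; do
    sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 \
      && echo "$crt: valid for at least 24h" \
      || echo "$crt: expires within 24h"
  done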
	I1108 23:44:03.534082  213888 kubeadm.go:404] StartCluster: {Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:44:03.534196  213888 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1108 23:44:03.534267  213888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:44:03.584679  213888 cri.go:89] found id: "db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f"
	I1108 23:44:03.584695  213888 cri.go:89] found id: "e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5"
	I1108 23:44:03.584698  213888 cri.go:89] found id: "998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1"
	I1108 23:44:03.584701  213888 cri.go:89] found id: "daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9"
	I1108 23:44:03.584704  213888 cri.go:89] found id: "b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2"
	I1108 23:44:03.584707  213888 cri.go:89] found id: "46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573"
	I1108 23:44:03.584709  213888 cri.go:89] found id: "a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086"
	I1108 23:44:03.584711  213888 cri.go:89] found id: ""
	I1108 23:44:03.584767  213888 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1108 23:44:03.616378  213888 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","pid":1604,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9/rootfs","created":"2023-11-08T23:43:40.318157335Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wv6f7_7ab3ac5b-5a0e-462b-a171-08f507184dfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wv6f7","io.kubernetes.cri.sand
box-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7ab3ac5b-5a0e-462b-a171-08f507184dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","pid":1110,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1/rootfs","created":"2023-11-08T23:43:18.68773069Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-400359_faaa6dec7d9cbf75400a4930b93bdc7d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes
.cri.sandbox-name":"etcd-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"faaa6dec7d9cbf75400a4930b93bdc7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573","pid":1243,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573/rootfs","created":"2023-11-08T23:43:19.79473196Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","io.kubernetes.cri.sandbox-name":"etcd-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fa
aa6dec7d9cbf75400a4930b93bdc7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","pid":1137,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0/rootfs","created":"2023-11-08T23:43:18.759582Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-400359","io.ku
bernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"782fbbe1f7d627cd92711fb14a0b0813"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","pid":1799,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44/rootfs","created":"2023-11-08T23:43:41.584597939Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-tqvtr_b03be54f-57e6-4247-84ba-9545f9b1b4ed","io.kubernetes.cri.sandbox-memory
":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-tqvtr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b03be54f-57e6-4247-84ba-9545f9b1b4ed"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1/rootfs","created":"2023-11-08T23:43:40.529772065Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.3","io.kubernetes.cri.sandbox-id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","io.kubernetes.cri.sandbox-name":"kube-proxy-wv6f7","io.kubernetes.cri.sandbox-namespace":"kube-sy
stem","io.kubernetes.cri.sandbox-uid":"7ab3ac5b-5a0e-462b-a171-08f507184dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","pid":1160,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b/rootfs","created":"2023-11-08T23:43:18.813882118Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-400359_af28ec4ee73fcf841ab21630a0a61078","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox
-name":"kube-scheduler-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"af28ec4ee73fcf841ab21630a0a61078"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","pid":1838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186/rootfs","created":"2023-11-08T23:43:41.837718349Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_01aed977-1439-433c-b8b1-869c92
fcd9e2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"01aed977-1439-433c-b8b1-869c92fcd9e2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086","pid":1198,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086/rootfs","created":"2023-11-08T23:43:19.509573182Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.3","io.kubernetes.cri.sandbox-id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-40
0359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"782fbbe1f7d627cd92711fb14a0b0813"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2","pid":1272,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2/rootfs","created":"2023-11-08T23:43:19.928879069Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.3","io.kubernetes.cri.sandbox-id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.
cri.sandbox-uid":"926dd51d8b9a510a42b3d2d730469c12"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","pid":1169,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef/rootfs","created":"2023-11-08T23:43:18.854841205Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-400359_926dd51d8b9a510a42b3d2d730469c12","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-con
troller-manager-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"926dd51d8b9a510a42b3d2d730469c12"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9","pid":1308,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9/rootfs","created":"2023-11-08T23:43:20.119265886Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.3","io.kubernetes.cri.sandbox-id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernete
s.cri.sandbox-uid":"af28ec4ee73fcf841ab21630a0a61078"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f","pid":1923,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f/rootfs","created":"2023-11-08T23:43:43.423326377Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"01aed977-1439-433c-b8b1-869c92fcd9e2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id
":"e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5","pid":1870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5/rootfs","created":"2023-11-08T23:43:42.0245694Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-tqvtr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b03be54f-57e6-4247-84ba-9545f9b1b4ed"},"owner":"root"}]
	I1108 23:44:03.616807  213888 cri.go:126] list returned 14 containers
	I1108 23:44:03.616824  213888 cri.go:129] container: {ID:0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9 Status:running}
	I1108 23:44:03.616850  213888 cri.go:131] skipping 0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9 - not in ps
	I1108 23:44:03.616857  213888 cri.go:129] container: {ID:127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1 Status:running}
	I1108 23:44:03.616865  213888 cri.go:131] skipping 127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1 - not in ps
	I1108 23:44:03.616871  213888 cri.go:129] container: {ID:46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 Status:running}
	I1108 23:44:03.616879  213888 cri.go:135] skipping {46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 running}: state = "running", want "paused"
	I1108 23:44:03.616892  213888 cri.go:129] container: {ID:523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 Status:running}
	I1108 23:44:03.616900  213888 cri.go:131] skipping 523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 - not in ps
	I1108 23:44:03.616906  213888 cri.go:129] container: {ID:8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44 Status:running}
	I1108 23:44:03.616913  213888 cri.go:131] skipping 8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44 - not in ps
	I1108 23:44:03.616919  213888 cri.go:129] container: {ID:998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 Status:running}
	I1108 23:44:03.616927  213888 cri.go:135] skipping {998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 running}: state = "running", want "paused"
	I1108 23:44:03.616934  213888 cri.go:129] container: {ID:9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b Status:running}
	I1108 23:44:03.616941  213888 cri.go:131] skipping 9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b - not in ps
	I1108 23:44:03.616947  213888 cri.go:129] container: {ID:9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186 Status:running}
	I1108 23:44:03.616954  213888 cri.go:131] skipping 9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186 - not in ps
	I1108 23:44:03.616959  213888 cri.go:129] container: {ID:a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086 Status:running}
	I1108 23:44:03.616963  213888 cri.go:135] skipping {a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086 running}: state = "running", want "paused"
	I1108 23:44:03.616967  213888 cri.go:129] container: {ID:b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 Status:running}
	I1108 23:44:03.616973  213888 cri.go:135] skipping {b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 running}: state = "running", want "paused"
	I1108 23:44:03.616980  213888 cri.go:129] container: {ID:ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef Status:running}
	I1108 23:44:03.616988  213888 cri.go:131] skipping ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef - not in ps
	I1108 23:44:03.616993  213888 cri.go:129] container: {ID:daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 Status:running}
	I1108 23:44:03.617001  213888 cri.go:135] skipping {daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 running}: state = "running", want "paused"
	I1108 23:44:03.617019  213888 cri.go:129] container: {ID:db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f Status:running}
	I1108 23:44:03.617027  213888 cri.go:135] skipping {db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f running}: state = "running", want "paused"
	I1108 23:44:03.617034  213888 cri.go:129] container: {ID:e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 Status:running}
	I1108 23:44:03.617041  213888 cri.go:135] skipping {e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 running}: state = "running", want "paused"
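The cri.go lines above apply a two-stage filter to the runc listing: containers that crictl did not report for kube-system are skipped as "not in ps", and the rest are skipped because their state is "running" while the caller asked for "paused", so nothing is selected here. A rough shell sketch of that logic (the real filtering happens in minikube's Go code; jq being available on the node is an assumption):

  want=paused
  known=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
  sudo runc --root /run/containerd/runc/k8s.io list -f json |
    jq -r '.[] | "\(.id) \(.status)"' |
    while read -r id state; do
      grep -q "$id" <<<"$known" || { echo "skip $id - not in ps"; continue; }
      [ "$state" = "$want" ]    || { echo "skip $id - state $state, want $want"; continue; }
      echo "selected $id"
    done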
	I1108 23:44:03.617112  213888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 23:44:03.629140  213888 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 23:44:03.629156  213888 kubeadm.go:636] restartCluster start
	I1108 23:44:03.629300  213888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 23:44:03.640035  213888 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:03.640634  213888 kubeconfig.go:92] found "functional-400359" server: "https://192.168.39.189:8441"
	I1108 23:44:03.641989  213888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 23:44:03.652731  213888 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I1108 23:44:03.652746  213888 kubeadm.go:1128] stopping kube-system containers ...
	I1108 23:44:03.652762  213888 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1108 23:44:03.652812  213888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:44:03.699235  213888 cri.go:89] found id: "db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f"
	I1108 23:44:03.699249  213888 cri.go:89] found id: "e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5"
	I1108 23:44:03.699251  213888 cri.go:89] found id: "998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1"
	I1108 23:44:03.699255  213888 cri.go:89] found id: "daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9"
	I1108 23:44:03.699260  213888 cri.go:89] found id: "b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2"
	I1108 23:44:03.699263  213888 cri.go:89] found id: "46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573"
	I1108 23:44:03.699265  213888 cri.go:89] found id: "a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086"
	I1108 23:44:03.699268  213888 cri.go:89] found id: ""
	I1108 23:44:03.699272  213888 cri.go:234] Stopping containers: [db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086]
	I1108 23:44:03.699323  213888 ssh_runner.go:195] Run: which crictl
	I1108 23:44:03.703856  213888 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086
	I1108 23:44:19.459008  213888 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086: (15.75506263s)
	I1108 23:44:19.459080  213888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 23:44:19.504154  213888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 23:44:19.515266  213888 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  8 23:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Nov  8 23:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  8 23:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Nov  8 23:43 /etc/kubernetes/scheduler.conf
	
	I1108 23:44:19.515346  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1108 23:44:19.524771  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1108 23:44:19.534582  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1108 23:44:19.544348  213888 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:19.544402  213888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 23:44:19.553487  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1108 23:44:19.562898  213888 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:19.562943  213888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 23:44:19.572855  213888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 23:44:19.583092  213888 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 23:44:19.583112  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:19.656656  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:20.718251  213888 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.061543708s)
	I1108 23:44:20.718274  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:20.940824  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:21.049550  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
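The five commands above are kubeadm's init split into phases, re-run against the regenerated config so the existing control plane is reconfigured without a full init. Condensed into one sketch (the loop is an illustration; the phases, binary path, and config path are the ones shown in the log):

  K8S_BIN=/var/lib/minikube/binaries/v1.28.3
  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
    # $phase is intentionally unquoted so "certs all" splits into subcommand + argument
    sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  done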
	I1108 23:44:21.155180  213888 api_server.go:52] waiting for apiserver process to appear ...
	I1108 23:44:21.155262  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:21.170827  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:21.687533  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:22.187100  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:22.201909  213888 api_server.go:72] duration metric: took 1.046727455s to wait for apiserver process to appear ...
	I1108 23:44:22.201930  213888 api_server.go:88] waiting for apiserver healthz status ...
	I1108 23:44:22.201951  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:22.202592  213888 api_server.go:269] stopped: https://192.168.39.189:8441/healthz: Get "https://192.168.39.189:8441/healthz": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:22.202621  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:22.203025  213888 api_server.go:269] stopped: https://192.168.39.189:8441/healthz: Get "https://192.168.39.189:8441/healthz": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:22.703898  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.321821  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 23:44:24.321848  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 23:44:24.321866  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.331452  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 23:44:24.331472  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 23:44:24.703560  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.710858  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 23:44:24.710888  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 23:44:25.203966  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:25.210943  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 23:44:25.210976  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 23:44:25.703512  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:25.709194  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 200:
	ok
	I1108 23:44:25.717645  213888 api_server.go:141] control plane version: v1.28.3
	I1108 23:44:25.717670  213888 api_server.go:131] duration metric: took 3.515732599s to wait for apiserver health ...
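The probe loop above tolerates the early 403s (the anonymous probe is rejected while RBAC is still bootstrapping) and the 500s from the two failing post-start hooks, and stops at the first 200 ok. Roughly equivalent to polling the endpoint by hand (an illustrative command, not part of the test; -k because the API server presents the cluster's own CA-signed certificate):

  until curl -sk https://192.168.39.189:8441/healthz | grep -qx ok; do sleep 0.5; done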
	I1108 23:44:25.717682  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:44:25.717690  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:44:25.719887  213888 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 23:44:25.721531  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 23:44:25.734492  213888 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 23:44:25.771439  213888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 23:44:25.784433  213888 system_pods.go:59] 7 kube-system pods found
	I1108 23:44:25.784465  213888 system_pods.go:61] "coredns-5dd5756b68-tqvtr" [b03be54f-57e6-4247-84ba-9545f9b1b4ed] Running
	I1108 23:44:25.784475  213888 system_pods.go:61] "etcd-functional-400359" [70bdf2a8-b999-4d46-baf3-0c9267d9d3ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 23:44:25.784489  213888 system_pods.go:61] "kube-apiserver-functional-400359" [9b2db385-150c-4599-b59e-165208edd076] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 23:44:25.784498  213888 system_pods.go:61] "kube-controller-manager-functional-400359" [e2f2bb0b-f018-4ada-bd5d-d225b097763b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 23:44:25.784504  213888 system_pods.go:61] "kube-proxy-wv6f7" [7ab3ac5b-5a0e-462b-a171-08f507184dfa] Running
	I1108 23:44:25.784511  213888 system_pods.go:61] "kube-scheduler-functional-400359" [0156fad8-02e5-40ae-a5d1-17824d5c238b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 23:44:25.784521  213888 system_pods.go:61] "storage-provisioner" [01aed977-1439-433c-b8b1-869c92fcd9e2] Running
	I1108 23:44:25.784531  213888 system_pods.go:74] duration metric: took 13.073006ms to wait for pod list to return data ...
	I1108 23:44:25.784539  213888 node_conditions.go:102] verifying NodePressure condition ...
	I1108 23:44:25.793569  213888 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 23:44:25.793597  213888 node_conditions.go:123] node cpu capacity is 2
	I1108 23:44:25.793611  213888 node_conditions.go:105] duration metric: took 9.06541ms to run NodePressure ...
	I1108 23:44:25.793633  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:26.114141  213888 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 23:44:26.120712  213888 kubeadm.go:787] kubelet initialised
	I1108 23:44:26.120723  213888 kubeadm.go:788] duration metric: took 6.565858ms waiting for restarted kubelet to initialise ...
	I1108 23:44:26.120731  213888 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 23:44:26.131331  213888 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:26.138144  213888 pod_ready.go:92] pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:26.138155  213888 pod_ready.go:81] duration metric: took 6.806304ms waiting for pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:26.138164  213888 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:28.164811  213888 pod_ready.go:102] pod "etcd-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:30.665514  213888 pod_ready.go:92] pod "etcd-functional-400359" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:30.665553  213888 pod_ready.go:81] duration metric: took 4.527359591s waiting for pod "etcd-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:30.665565  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:32.689403  213888 pod_ready.go:102] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:34.690254  213888 pod_ready.go:102] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:35.686775  213888 pod_ready.go:92] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:35.686791  213888 pod_ready.go:81] duration metric: took 5.021218707s waiting for pod "kube-apiserver-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:35.686800  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:37.708359  213888 pod_ready.go:102] pod "kube-controller-manager-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:40.208162  213888 pod_ready.go:102] pod "kube-controller-manager-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:41.201149  213888 pod_ready.go:97] error getting pod "kube-controller-manager-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201165  213888 pod_ready.go:81] duration metric: took 5.514358749s waiting for pod "kube-controller-manager-functional-400359" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.201176  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201204  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wv6f7" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:41.201819  213888 pod_ready.go:97] error getting pod "kube-proxy-wv6f7" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wv6f7": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201831  213888 pod_ready.go:81] duration metric: took 621.035µs waiting for pod "kube-proxy-wv6f7" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.201841  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-wv6f7" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wv6f7": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201857  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:41.202340  213888 pod_ready.go:97] error getting pod "kube-scheduler-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.202352  213888 pod_ready.go:81] duration metric: took 489.317µs waiting for pod "kube-scheduler-functional-400359" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.202362  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.202373  213888 pod_ready.go:38] duration metric: took 15.08163132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
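
	As a side note, not from the log: the pod_ready.go lines above poll each control-plane pod until its Ready condition is True (or the apiserver stops answering, as happens here). A hedged client-go sketch of that kind of check follows; the kubeconfig path is the one this run uses, while the pod name and timeouts are picked for illustration.

// podready: illustrative client-go sketch, not minikube's pod_ready.go. It polls
// one kube-system pod until its Ready condition is True or a timeout elapses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17586-201782/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	name := "etcd-functional-400359" // pod name chosen for illustration
	for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(2 * time.Second) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Println("error getting pod:", err)
			continue
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				fmt.Println(name, "is Ready")
				return
			}
		}
		fmt.Println(name, "not Ready yet")
	}
	fmt.Println("timed out waiting for", name)
}
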
	I1108 23:44:41.202390  213888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 23:44:41.213978  213888 ops.go:34] apiserver oom_adj: -16
	I1108 23:44:41.213994  213888 kubeadm.go:640] restartCluster took 37.584832416s
	I1108 23:44:41.214002  213888 kubeadm.go:406] StartCluster complete in 37.679936432s
	I1108 23:44:41.214034  213888 settings.go:142] acquiring lock: {Name:mkb2acb83ccee48e6a009b8a47bf5424e6c38acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:41.214142  213888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:44:41.215036  213888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-201782/kubeconfig: {Name:mk9c6e9f67ac12aac98932c0b45c3a0608805854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:41.215314  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 23:44:41.215404  213888 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 23:44:41.215479  213888 addons.go:69] Setting storage-provisioner=true in profile "functional-400359"
	I1108 23:44:41.215505  213888 addons.go:69] Setting default-storageclass=true in profile "functional-400359"
	I1108 23:44:41.215525  213888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-400359"
	I1108 23:44:41.215526  213888 addons.go:231] Setting addon storage-provisioner=true in "functional-400359"
	W1108 23:44:41.215533  213888 addons.go:240] addon storage-provisioner should already be in state true
	I1108 23:44:41.215537  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:44:41.215605  213888 host.go:66] Checking if "functional-400359" exists ...
	I1108 23:44:41.215913  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.215951  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.216018  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.216055  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1108 23:44:41.216959  213888 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-400359" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.39.189:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:41.216977  213888 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.39.189:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.217012  213888 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1108 23:44:41.220368  213888 out.go:177] * Verifying Kubernetes components...
	I1108 23:44:41.222004  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:44:41.231875  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I1108 23:44:41.232530  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.233190  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.233218  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.233719  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.234280  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.234325  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.237697  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1108 23:44:41.238255  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.238752  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.238768  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.239192  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.239445  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.244598  213888 addons.go:231] Setting addon default-storageclass=true in "functional-400359"
	W1108 23:44:41.244614  213888 addons.go:240] addon default-storageclass should already be in state true
	I1108 23:44:41.244642  213888 host.go:66] Checking if "functional-400359" exists ...
	I1108 23:44:41.245132  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.245164  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.252037  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I1108 23:44:41.252498  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.253020  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.253051  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.253456  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.253670  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.255485  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:41.257960  213888 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:44:41.259863  213888 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 23:44:41.259875  213888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 23:44:41.259896  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:41.261665  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I1108 23:44:41.262263  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.262840  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.262867  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.263263  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.263662  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.263878  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.263916  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.264121  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:41.264156  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.264394  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:41.264629  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:41.264831  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:41.265036  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:41.280509  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I1108 23:44:41.281054  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.281632  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.281643  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.282046  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.282278  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.284072  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:41.284406  213888 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 23:44:41.284420  213888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 23:44:41.284442  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:41.287607  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.288057  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:41.288091  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.288286  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:41.288503  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:41.288686  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:41.288836  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:41.340989  213888 node_ready.go:35] waiting up to 6m0s for node "functional-400359" to be "Ready" ...
	E1108 23:44:41.341045  213888 start.go:891] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1108 23:44:41.341073  213888 start.go:294] Unable to inject {"host.minikube.internal": 192.168.39.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1108 23:44:41.341104  213888 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I1108 23:44:41.341639  213888 node_ready.go:53] error getting node "functional-400359": Get "https://192.168.39.189:8441/api/v1/nodes/functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.341651  213888 node_ready.go:38] duration metric: took 637.211µs waiting for node "functional-400359" to be "Ready" ...
	I1108 23:44:41.344408  213888 out.go:177] 
	W1108 23:44:41.345988  213888 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-400359": Get "https://192.168.39.189:8441/api/v1/nodes/functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:41.346006  213888 out.go:239] * 
	W1108 23:44:41.346885  213888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 23:44:41.349263  213888 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	76666ef471448       6e38f40d628db       3 seconds ago        Exited              storage-provisioner       2                   9c7477be15957       storage-provisioner
	dc58c905bfcc3       5374347291230       34 seconds ago       Running             kube-apiserver            2                   523d23a3366a5       kube-apiserver-functional-400359
	7921f51c4026f       10baa1ca17068       34 seconds ago       Running             kube-controller-manager   2                   ca712d9c0441a       kube-controller-manager-functional-400359
	bff1a67a2e4bc       5374347291230       36 seconds ago       Created             kube-apiserver            1                   523d23a3366a5       kube-apiserver-functional-400359
	88c140ed6030d       ead0a4a53df89       46 seconds ago       Running             coredns                   1                   8005a17990fd0       coredns-5dd5756b68-tqvtr
	fb3df666c8263       bfc896cf80fba       46 seconds ago       Running             kube-proxy                1                   0d0883976452b       kube-proxy-wv6f7
	1d784d6322fa7       73deb9a3f7025       46 seconds ago       Running             etcd                      1                   1274367410852       etcd-functional-400359
	2faf0584a90c9       10baa1ca17068       46 seconds ago       Exited              kube-controller-manager   1                   ca712d9c0441a       kube-controller-manager-functional-400359
	a06cdad021ec7       6d1b4fd1b182d       46 seconds ago       Running             kube-scheduler            1                   9bb1405590c60       kube-scheduler-functional-400359
	e502430453488       ead0a4a53df89       About a minute ago   Exited              coredns                   0                   8005a17990fd0       coredns-5dd5756b68-tqvtr
	998ca340aa83f       bfc896cf80fba       About a minute ago   Exited              kube-proxy                0                   0d0883976452b       kube-proxy-wv6f7
	daf40bd6e2a8e       6d1b4fd1b182d       About a minute ago   Exited              kube-scheduler            0                   9bb1405590c60       kube-scheduler-functional-400359
	46b02dbdf3f22       73deb9a3f7025       About a minute ago   Exited              etcd                      0                   1274367410852       etcd-functional-400359
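
	Illustrative only, not part of the dump: the table above is the node's container status listing. The same containers could also be enumerated through containerd's Go client in the "k8s.io" namespace, as sketched below; the socket path and namespace are the usual defaults and are assumed here.

// listcontainers: illustrative sketch of listing containers via containerd's
// Go client, printing each container ID and its image reference.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io") // CRI namespace
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s\t%s\n", c.ID(), info.Image)
	}
}
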
	
	* 
	* ==> containerd <==
	* -- Journal begins at Wed 2023-11-08 23:42:35 UTC, ends at Wed 2023-11-08 23:44:55 UTC. --
	Nov 08 23:44:21 functional-400359 containerd[2683]: time="2023-11-08T23:44:21.685611342Z" level=info msg="CreateContainer within sandbox \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:2,}"
	Nov 08 23:44:21 functional-400359 containerd[2683]: time="2023-11-08T23:44:21.687589147Z" level=info msg="CreateContainer within sandbox \"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
	Nov 08 23:44:21 functional-400359 containerd[2683]: time="2023-11-08T23:44:21.729602159Z" level=info msg="CreateContainer within sandbox \"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"7921f51c4026fd4eadeac9dbccfa803fc415bc1ed99e900bd95f598a614d8315\""
	Nov 08 23:44:21 functional-400359 containerd[2683]: time="2023-11-08T23:44:21.734336501Z" level=info msg="StartContainer for \"7921f51c4026fd4eadeac9dbccfa803fc415bc1ed99e900bd95f598a614d8315\""
	Nov 08 23:44:21 functional-400359 containerd[2683]: time="2023-11-08T23:44:21.737427063Z" level=info msg="CreateContainer within sandbox \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:2,} returns container id \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\""
	Nov 08 23:44:21 functional-400359 containerd[2683]: time="2023-11-08T23:44:21.738551961Z" level=info msg="StartContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\""
	Nov 08 23:44:22 functional-400359 containerd[2683]: time="2023-11-08T23:44:22.211565896Z" level=info msg="StartContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" returns successfully"
	Nov 08 23:44:22 functional-400359 containerd[2683]: time="2023-11-08T23:44:22.232714750Z" level=info msg="StartContainer for \"7921f51c4026fd4eadeac9dbccfa803fc415bc1ed99e900bd95f598a614d8315\" returns successfully"
	Nov 08 23:44:24 functional-400359 containerd[2683]: time="2023-11-08T23:44:24.480975904Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Nov 08 23:44:41 functional-400359 containerd[2683]: time="2023-11-08T23:44:41.034766097Z" level=info msg="StopContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" with timeout 30 (s)"
	Nov 08 23:44:41 functional-400359 containerd[2683]: time="2023-11-08T23:44:41.036167239Z" level=info msg="Stop container \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" with signal terminated"
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.190767681Z" level=info msg="shim disconnected" id=17414ed203c9669a58319f600c2c2c1debce57973915376012481b0781813b4d namespace=k8s.io
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.190888023Z" level=warning msg="cleaning up after shim disconnected" id=17414ed203c9669a58319f600c2c2c1debce57973915376012481b0781813b4d namespace=k8s.io
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.190901119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.296538210Z" level=info msg="RemoveContainer for \"db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f\""
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.298300826Z" level=info msg="CreateContainer within sandbox \"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.304289445Z" level=info msg="RemoveContainer for \"db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f\" returns successfully"
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.334877833Z" level=info msg="CreateContainer within sandbox \"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"76666ef4714482e565dfebdae2cfc50cdff1ac24e59143795efb2b5476b80602\""
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.336048798Z" level=info msg="StartContainer for \"76666ef4714482e565dfebdae2cfc50cdff1ac24e59143795efb2b5476b80602\""
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.429331672Z" level=info msg="StartContainer for \"76666ef4714482e565dfebdae2cfc50cdff1ac24e59143795efb2b5476b80602\" returns successfully"
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.465276820Z" level=info msg="shim disconnected" id=76666ef4714482e565dfebdae2cfc50cdff1ac24e59143795efb2b5476b80602 namespace=k8s.io
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.465518500Z" level=warning msg="cleaning up after shim disconnected" id=76666ef4714482e565dfebdae2cfc50cdff1ac24e59143795efb2b5476b80602 namespace=k8s.io
	Nov 08 23:44:52 functional-400359 containerd[2683]: time="2023-11-08T23:44:52.465575495Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 08 23:44:53 functional-400359 containerd[2683]: time="2023-11-08T23:44:53.306433298Z" level=info msg="RemoveContainer for \"17414ed203c9669a58319f600c2c2c1debce57973915376012481b0781813b4d\""
	Nov 08 23:44:53 functional-400359 containerd[2683]: time="2023-11-08T23:44:53.313688261Z" level=info msg="RemoveContainer for \"17414ed203c9669a58319f600c2c2c1debce57973915376012481b0781813b4d\" returns successfully"
	
	* 
	* ==> coredns [88c140ed6030d22284aaafb49382d15ef7da52d8beb9e058c36ea698c2910d04] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57342 - 44358 "HINFO IN 4361793349757605016.248109365602167116. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.135909373s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: unknown (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: unknown (get namespaces)
	
	* 
	* ==> coredns [e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51534 - 35900 "HINFO IN 2585345581505525764.4555830120890176857. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031001187s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.156846] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.062315] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.304325] systemd-fstab-generator[561]: Ignoring "noauto" for root device
	[  +0.112180] systemd-fstab-generator[572]: Ignoring "noauto" for root device
	[  +0.151842] systemd-fstab-generator[585]: Ignoring "noauto" for root device
	[  +0.124353] systemd-fstab-generator[596]: Ignoring "noauto" for root device
	[  +0.268439] systemd-fstab-generator[623]: Ignoring "noauto" for root device
	[  +6.156386] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[Nov 8 23:43] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +9.282190] systemd-fstab-generator[1362]: Ignoring "noauto" for root device
	[ +18.264010] systemd-fstab-generator[2015]: Ignoring "noauto" for root device
	[  +0.177052] systemd-fstab-generator[2026]: Ignoring "noauto" for root device
	[  +0.171180] systemd-fstab-generator[2039]: Ignoring "noauto" for root device
	[  +0.169893] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
	[  +0.296549] systemd-fstab-generator[2076]: Ignoring "noauto" for root device
	[Nov 8 23:44] systemd-fstab-generator[2615]: Ignoring "noauto" for root device
	[  +0.147087] systemd-fstab-generator[2626]: Ignoring "noauto" for root device
	[  +0.171247] systemd-fstab-generator[2639]: Ignoring "noauto" for root device
	[  +0.165487] systemd-fstab-generator[2650]: Ignoring "noauto" for root device
	[  +0.295897] systemd-fstab-generator[2676]: Ignoring "noauto" for root device
	[ +19.128891] systemd-fstab-generator[3485]: Ignoring "noauto" for root device
	[ +15.032820] kauditd_printk_skb: 23 callbacks suppressed
	
	* 
	* ==> etcd [1d784d6322fa72bf1ea8c9873171f75a644fcdac3d60a60b7253cea2aad58484] <==
	* {"level":"info","ts":"2023-11-08T23:44:10.907861Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T23:44:10.907973Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-11-08T23:44:10.908286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a switched to configuration voters=(8048648980531676538)"}
	{"level":"info","ts":"2023-11-08T23:44:10.908344Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","added-peer-id":"6fb28b9aae66857a","added-peer-peer-urls":["https://192.168.39.189:2380"]}
	{"level":"info","ts":"2023-11-08T23:44:10.908546Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:44:10.908577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:44:10.919242Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:10.919299Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:10.919177Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-08T23:44:10.920701Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-08T23:44:10.920863Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6fb28b9aae66857a","initial-advertise-peer-urls":["https://192.168.39.189:2380"],"listen-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.189:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-08T23:44:12.571328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgPreVoteResp from 6fb28b9aae66857a at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became candidate at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.571747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgVoteResp from 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.571885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became leader at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.572003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6fb28b9aae66857a elected leader 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.574123Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6fb28b9aae66857a","local-member-attributes":"{Name:functional-400359 ClientURLs:[https://192.168.39.189:2379]}","request-path":"/0/members/6fb28b9aae66857a/attributes","cluster-id":"f0bdb053fd9e03ec","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T23:44:12.574193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:44:12.575568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T23:44:12.575581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T23:44:12.57599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T23:44:12.576127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:44:12.580777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.189:2379"}
	
	* 
	* ==> etcd [46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573] <==
	* {"level":"info","ts":"2023-11-08T23:43:21.2639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:43:21.265203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.189:2379"}
	{"level":"info","ts":"2023-11-08T23:43:21.264037Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.264104Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:43:21.268038Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T23:43:21.273657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T23:43:21.27674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T23:43:21.306896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.332049Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.332311Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:43.658034Z","caller":"traceutil/trace.go:171","msg":"trace[1655151050] linearizableReadLoop","detail":"{readStateIndex:436; appliedIndex:435; }","duration":"158.288056ms","start":"2023-11-08T23:43:43.499691Z","end":"2023-11-08T23:43:43.657979Z","steps":["trace[1655151050] 'read index received'  (duration: 158.050466ms)","trace[1655151050] 'applied index is now lower than readState.Index'  (duration: 237.256µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-08T23:43:43.658216Z","caller":"traceutil/trace.go:171","msg":"trace[1004018470] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"165.867105ms","start":"2023-11-08T23:43:43.492343Z","end":"2023-11-08T23:43:43.65821Z","steps":["trace[1004018470] 'process raft request'  (duration: 165.460392ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T23:43:43.659133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.382515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2023-11-08T23:43:43.659215Z","caller":"traceutil/trace.go:171","msg":"trace[1204654578] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:419; }","duration":"159.531169ms","start":"2023-11-08T23:43:43.499663Z","end":"2023-11-08T23:43:43.659194Z","steps":["trace[1204654578] 'agreement among raft nodes before linearized reading'  (duration: 158.722284ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T23:43:49.836017Z","caller":"traceutil/trace.go:171","msg":"trace[1640228342] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"142.995238ms","start":"2023-11-08T23:43:49.693Z","end":"2023-11-08T23:43:49.835995Z","steps":["trace[1640228342] 'process raft request'  (duration: 142.737466ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T23:44:09.257705Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-08T23:44:09.257894Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-400359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	{"level":"warn","ts":"2023-11-08T23:44:09.258128Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.258264Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.273807Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.274055Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-08T23:44:09.274266Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6fb28b9aae66857a","current-leader-member-id":"6fb28b9aae66857a"}
	{"level":"info","ts":"2023-11-08T23:44:09.277371Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:09.277689Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:09.277704Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-400359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	
	* 
	* ==> kernel <==
	*  23:44:56 up 2 min,  0 users,  load average: 1.52, 0.75, 0.29
	Linux functional-400359 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0] <==
	* 
	* ==> kube-apiserver [dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b] <==
	* I1108 23:44:41.061968       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1108 23:44:41.061974       1 available_controller.go:439] Shutting down AvailableConditionController
	I1108 23:44:41.061988       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I1108 23:44:41.062077       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I1108 23:44:41.062210       1 controller.go:129] Ending legacy_token_tracking_controller
	I1108 23:44:41.062216       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I1108 23:44:41.061905       1 controller.go:115] Shutting down OpenAPI V3 controller
	I1108 23:44:41.065134       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 23:44:41.065161       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 23:44:41.068603       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1108 23:44:41.068954       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1108 23:44:41.069001       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I1108 23:44:41.069017       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I1108 23:44:41.069309       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1108 23:44:41.069533       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1108 23:44:41.068873       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 23:44:41.072157       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I1108 23:44:41.072289       1 secure_serving.go:258] Stopped listening on [::]:8441
	I1108 23:44:41.074305       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1108 23:44:41.069310       1 controller.go:159] Shutting down quota evaluator
	I1108 23:44:41.074356       1 controller.go:178] quota evaluator worker shutdown
	I1108 23:44:41.074367       1 controller.go:178] quota evaluator worker shutdown
	I1108 23:44:41.074371       1 controller.go:178] quota evaluator worker shutdown
	I1108 23:44:41.074377       1 controller.go:178] quota evaluator worker shutdown
	I1108 23:44:41.074381       1 controller.go:178] quota evaluator worker shutdown
	
	* 
	* ==> kube-controller-manager [2faf0584a90c98fa3ae503339949f6fdc901e881c318c3b0b4ca3323123ba1a0] <==
	* I1108 23:44:10.838065       1 serving.go:348] Generated self-signed cert in-memory
	I1108 23:44:11.452649       1 controllermanager.go:189] "Starting" version="v1.28.3"
	I1108 23:44:11.452696       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 23:44:11.454751       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1108 23:44:11.455029       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 23:44:11.455309       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 23:44:11.455704       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 23:44:11.475414       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I1108 23:44:11.576258       1 shared_informer.go:318] Caches are synced for tokens
	I1108 23:44:12.801347       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1108 23:44:12.802296       1 cleaner.go:83] "Starting CSR cleaner controller"
	I1108 23:44:12.899559       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I1108 23:44:12.899798       1 namespace_controller.go:197] "Starting namespace controller"
	I1108 23:44:12.900091       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I1108 23:44:12.926665       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I1108 23:44:12.927319       1 stateful_set.go:161] "Starting stateful set controller"
	I1108 23:44:12.927524       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I1108 23:44:12.935324       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1108 23:44:12.935710       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I1108 23:44:12.936165       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	F1108 23:44:12.956649       1 client_builder_dynamic.go:174] Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.189:8441: connect: connection refused
	
	* 
	* ==> kube-controller-manager [7921f51c4026fd4eadeac9dbccfa803fc415bc1ed99e900bd95f598a614d8315] <==
	* I1108 23:44:36.888207       1 shared_informer.go:318] Caches are synced for deployment
	I1108 23:44:36.914587       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 23:44:36.950272       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-400359\" does not exist"
	I1108 23:44:36.959129       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 23:44:36.963962       1 shared_informer.go:318] Caches are synced for GC
	I1108 23:44:36.986020       1 shared_informer.go:318] Caches are synced for daemon sets
	I1108 23:44:36.993585       1 shared_informer.go:318] Caches are synced for node
	I1108 23:44:36.993740       1 range_allocator.go:174] "Sending events to api server"
	I1108 23:44:36.993795       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1108 23:44:36.993812       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1108 23:44:36.993931       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1108 23:44:36.998585       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1108 23:44:37.008283       1 shared_informer.go:318] Caches are synced for attach detach
	I1108 23:44:37.022208       1 shared_informer.go:318] Caches are synced for taint
	I1108 23:44:37.022353       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1108 23:44:37.022927       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-400359"
	I1108 23:44:37.023049       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1108 23:44:37.023069       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1108 23:44:37.023085       1 taint_manager.go:211] "Sending events to api server"
	I1108 23:44:37.024141       1 event.go:307] "Event occurred" object="functional-400359" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-400359 event: Registered Node functional-400359 in Controller"
	I1108 23:44:37.024519       1 shared_informer.go:318] Caches are synced for persistent volume
	I1108 23:44:37.048147       1 shared_informer.go:318] Caches are synced for TTL
	I1108 23:44:37.409049       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 23:44:37.413937       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 23:44:37.414048       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1] <==
	* I1108 23:43:40.754980       1 server_others.go:69] "Using iptables proxy"
	I1108 23:43:40.769210       1 node.go:141] Successfully retrieved node IP: 192.168.39.189
	I1108 23:43:40.838060       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 23:43:40.838106       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 23:43:40.841931       1 server_others.go:152] "Using iptables Proxier"
	I1108 23:43:40.842026       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 23:43:40.842300       1 server.go:846] "Version info" version="v1.28.3"
	I1108 23:43:40.842337       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 23:43:40.843102       1 config.go:188] "Starting service config controller"
	I1108 23:43:40.843156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 23:43:40.843175       1 config.go:97] "Starting endpoint slice config controller"
	I1108 23:43:40.843178       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 23:43:40.843838       1 config.go:315] "Starting node config controller"
	I1108 23:43:40.843878       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 23:43:40.943579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 23:43:40.943667       1 shared_informer.go:318] Caches are synced for service config
	I1108 23:43:40.943937       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [fb3df666c8263c19fd9a028191dcb6e116547d67a9bf7f535ab103998f60679d] <==
	* I1108 23:44:13.012381       1 shared_informer.go:311] Waiting for caches to sync for node config
	W1108 23:44:13.012621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.012810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.013169       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.189:8441: connect: connection refused' (may retry after sleeping)
	W1108 23:44:13.815291       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.815363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:13.950038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.950102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:14.326340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:14.326643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:15.820268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:15.820340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:16.787304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:16.787347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:17.093198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:17.093270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:19.899967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:19.900010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:20.381161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:20.381245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:24.387034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	E1108 23:44:24.387290       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	I1108 23:44:29.107551       1 shared_informer.go:318] Caches are synced for service config
	I1108 23:44:29.513134       1 shared_informer.go:318] Caches are synced for node config
	I1108 23:44:35.808555       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a06cdad021ec7e1e28779a525beede6288ae5f847a64e005969e95c7cf80f00a] <==
	* I1108 23:44:12.860548       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1108 23:44:12.860643       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1108 23:44:12.860727       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 23:44:12.864294       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 23:44:12.864532       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:44:12.864566       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 23:44:12.864879       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 23:44:12.961705       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1108 23:44:12.965186       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 23:44:12.965350       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1108 23:44:24.314857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E1108 23:44:24.314957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E1108 23:44:24.319832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E1108 23:44:24.320160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods)
	E1108 23:44:24.320904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E1108 23:44:24.321298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services)
	E1108 23:44:24.321419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E1108 23:44:24.322244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E1108 23:44:24.322300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces)
	E1108 23:44:24.322320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E1108 23:44:24.324606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E1108 23:44:24.328639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes)
	E1108 23:44:24.328706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E1108 23:44:24.328951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E1108 23:44:24.401809       1 reflector.go:147] pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kube-scheduler [daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9] <==
	* E1108 23:43:23.555057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:43:23.555310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 23:43:23.555637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 23:43:24.357554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 23:43:24.357652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 23:43:24.363070       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 23:43:24.363147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 23:43:24.439814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 23:43:24.439863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 23:43:24.511419       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 23:43:24.511725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 23:43:24.521064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1108 23:43:24.521357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1108 23:43:24.636054       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 23:43:24.636113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1108 23:43:24.742651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 23:43:24.742701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 23:43:24.766583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:43:24.766665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 23:43:24.821852       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 23:43:24.821977       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1108 23:43:26.911793       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:44:09.072908       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1108 23:44:09.073170       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1108 23:44:09.073383       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 23:42:35 UTC, ends at Wed 2023-11-08 23:44:56 UTC. --
	Nov 08 23:44:44 functional-400359 kubelet[3491]: E1108 23:44:44.966214    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:44 functional-400359 kubelet[3491]: E1108 23:44:44.966787    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:44 functional-400359 kubelet[3491]: E1108 23:44:44.967165    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:44 functional-400359 kubelet[3491]: E1108 23:44:44.967240    3491 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Nov 08 23:44:45 functional-400359 kubelet[3491]: E1108 23:44:45.174051    3491 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused" interval="800ms"
	Nov 08 23:44:45 functional-400359 kubelet[3491]: E1108 23:44:45.975128    3491 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused" interval="1.6s"
	Nov 08 23:44:47 functional-400359 kubelet[3491]: E1108 23:44:47.577388    3491 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused" interval="3.2s"
	Nov 08 23:44:50 functional-400359 kubelet[3491]: E1108 23:44:50.778385    3491 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused" interval="6.4s"
	Nov 08 23:44:51 functional-400359 kubelet[3491]: I1108 23:44:51.071803    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:52 functional-400359 kubelet[3491]: I1108 23:44:52.293004    3491 scope.go:117] "RemoveContainer" containerID="db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f"
	Nov 08 23:44:52 functional-400359 kubelet[3491]: I1108 23:44:52.293841    3491 scope.go:117] "RemoveContainer" containerID="17414ed203c9669a58319f600c2c2c1debce57973915376012481b0781813b4d"
	Nov 08 23:44:52 functional-400359 kubelet[3491]: I1108 23:44:52.294259    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:52 functional-400359 kubelet[3491]: I1108 23:44:52.294798    3491 status_manager.go:853] "Failed to get status for pod" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:52 functional-400359 kubelet[3491]: E1108 23:44:52.296430    3491 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"storage-provisioner.1795ca8194a7071e", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"storage-provisioner", UID:"01aed977-1439-433c-b8b1-869c92fcd9e2", APIVersion:"v1", ResourceVersion:"444", FieldPath:"spec.containers{storage-provisioner}"}, Reason:"Pulled", Message:"Container image \"gcr.io/k8s-minikube/storage-provisioner:v5\" already present on machine", Sourc
e:v1.EventSource{Component:"kubelet", Host:"functional-400359"}, FirstTimestamp:time.Date(2023, time.November, 8, 23, 44, 52, 295796510, time.Local), LastTimestamp:time.Date(2023, time.November, 8, 23, 44, 52, 295796510, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-400359"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.39.189:8441: connect: connection refused'(may retry after sleeping)
	Nov 08 23:44:53 functional-400359 kubelet[3491]: I1108 23:44:53.304398    3491 scope.go:117] "RemoveContainer" containerID="17414ed203c9669a58319f600c2c2c1debce57973915376012481b0781813b4d"
	Nov 08 23:44:53 functional-400359 kubelet[3491]: I1108 23:44:53.305811    3491 scope.go:117] "RemoveContainer" containerID="76666ef4714482e565dfebdae2cfc50cdff1ac24e59143795efb2b5476b80602"
	Nov 08 23:44:53 functional-400359 kubelet[3491]: E1108 23:44:53.306253    3491 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(01aed977-1439-433c-b8b1-869c92fcd9e2)\"" pod="kube-system/storage-provisioner" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2"
	Nov 08 23:44:53 functional-400359 kubelet[3491]: I1108 23:44:53.307426    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:53 functional-400359 kubelet[3491]: I1108 23:44:53.309045    3491 status_manager.go:853] "Failed to get status for pod" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:55 functional-400359 kubelet[3491]: E1108 23:44:55.256135    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:55 functional-400359 kubelet[3491]: E1108 23:44:55.256397    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:55 functional-400359 kubelet[3491]: E1108 23:44:55.256670    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:55 functional-400359 kubelet[3491]: E1108 23:44:55.256819    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:55 functional-400359 kubelet[3491]: E1108 23:44:55.257380    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:44:55 functional-400359 kubelet[3491]: E1108 23:44:55.257419    3491 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	
	* 
	* ==> storage-provisioner [76666ef4714482e565dfebdae2cfc50cdff1ac24e59143795efb2b5476b80602] <==
	* I1108 23:44:52.424222       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 23:44:52.426268       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 23:44:56.023249  214124 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	E1108 23:44:56.202502  214124 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-08T23:44:56Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-11-08T23:44:56Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: describe nodes, kube-apiserver [bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0]

                                                
                                                
** /stderr **
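The crictl failure above is only about a stale log path: /var/log/pods/kube-system_kube-apiserver-functional-400359_.../kube-apiserver/1.log no longer exists, most likely because the kube-apiserver container was restarted and its log rotated. The commands below are a hand-written diagnostic sketch, not harness output, assuming the same profile and minikube binary used in this run:

	# list the per-pod log directories that actually exist on the node
	out/minikube-linux-amd64 -p functional-400359 ssh -- sudo ls /var/log/pods/
	# find the current kube-apiserver container, then tail its logs directly
	out/minikube-linux-amd64 -p functional-400359 ssh -- sudo crictl ps -a --name kube-apiserver
	out/minikube-linux-amd64 -p functional-400359 ssh -- sudo crictl logs --tail 25 <container-id>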
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-400359 -n functional-400359
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-400359 -n functional-400359: exit status 2 (14.107432093s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-400359" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (71.27s)
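For local follow-up, the failing start can be replayed against the existing profile outside the harness. This is a minimal sketch, not part of the test run: it reuses the --extra-config and --wait flags from the failing invocation (see the start entry in the audit log further below) and adds verbose client logging, which is an optional extra here.

	# replay the failing start; the kvm2 driver and containerd runtime are
	# carried over from the existing functional-400359 profile
	out/minikube-linux-amd64 start -p functional-400359 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all --alsologtostderr -v=8
	# then confirm whether the apiserver component comes back up
	out/minikube-linux-amd64 status -p functional-400359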

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (2.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-400359 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-400359 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (53.610141ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.189:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-400359 get po -l tier=control-plane -n kube-system -o=json": exit status 1
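The empty "items" list together with the connection-refused stderr indicates the query never reached the apiserver at 192.168.39.189:8441. Once that endpoint answers again, the same selector can be checked by hand; the command below is a sketch (not harness output) that reuses the context, namespace, and label from the failing kubectl call:

	# list control-plane pods and their phases for the functional-400359 context
	kubectl --context functional-400359 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'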
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-400359 -n functional-400359
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-400359 -n functional-400359: exit status 2 (286.914349ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 logs -n 25: (1.444621632s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-764351 --log_dir                                                  | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	|         | /tmp/nospam-764351 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-764351                                                         | nospam-764351     | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:42 UTC |
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:42 UTC | 08 Nov 23 23:43 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | minikube-local-cache-test:functional-400359                              |                   |         |         |                     |                     |
	| cache   | functional-400359 cache delete                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | minikube-local-cache-test:functional-400359                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	| ssh     | functional-400359 ssh sudo                                               | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-400359                                                        | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-400359 ssh                                                    | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-400359 cache reload                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	| ssh     | functional-400359 ssh                                                    | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-400359 kubectl --                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | --context functional-400359                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:43:59
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:43:59.599157  213888 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:43:59.599412  213888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:43:59.599416  213888 out.go:309] Setting ErrFile to fd 2...
	I1108 23:43:59.599420  213888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:43:59.599606  213888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	I1108 23:43:59.600217  213888 out.go:303] Setting JSON to false
	I1108 23:43:59.601119  213888 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":23194,"bootTime":1699463846,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 23:43:59.601189  213888 start.go:138] virtualization: kvm guest
	I1108 23:43:59.603447  213888 out.go:177] * [functional-400359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 23:43:59.605356  213888 notify.go:220] Checking for updates...
	I1108 23:43:59.605376  213888 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:43:59.607074  213888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:43:59.608704  213888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:43:59.610319  213888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	I1108 23:43:59.611947  213888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 23:43:59.613523  213888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:43:59.615400  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:43:59.615477  213888 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:43:59.615864  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:43:59.615909  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:43:59.631683  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I1108 23:43:59.632150  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:43:59.632691  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:43:59.632708  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:43:59.633075  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:43:59.633250  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.666922  213888 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 23:43:59.668639  213888 start.go:298] selected driver: kvm2
	I1108 23:43:59.668648  213888 start.go:902] validating driver "kvm2" against &{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400
359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:43:59.668789  213888 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:43:59.669167  213888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:43:59.669241  213888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17586-201782/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 23:43:59.685241  213888 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 23:43:59.685958  213888 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 23:43:59.686030  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:43:59.686038  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:43:59.686047  213888 start_flags.go:323] config:
	{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:43:59.686238  213888 iso.go:125] acquiring lock: {Name:mk33479b76ec6919fe69628bcf9e99f9786f49af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:43:59.688123  213888 out.go:177] * Starting control plane node functional-400359 in cluster functional-400359
	I1108 23:43:59.689492  213888 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:43:59.689531  213888 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4
	I1108 23:43:59.689548  213888 cache.go:56] Caching tarball of preloaded images
	I1108 23:43:59.689653  213888 preload.go:174] Found /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1108 23:43:59.689661  213888 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1108 23:43:59.689851  213888 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/config.json ...
	I1108 23:43:59.690069  213888 start.go:365] acquiring machines lock for functional-400359: {Name:mkc58a906fd9c58de0776efcd0f08335945567ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 23:43:59.690115  213888 start.go:369] acquired machines lock for "functional-400359" in 32.532µs
	I1108 23:43:59.690130  213888 start.go:96] Skipping create...Using existing machine configuration
	I1108 23:43:59.690134  213888 fix.go:54] fixHost starting: 
	I1108 23:43:59.690432  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:43:59.690465  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:43:59.706016  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I1108 23:43:59.706457  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:43:59.706983  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:43:59.707003  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:43:59.707316  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:43:59.707534  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.707715  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:43:59.709629  213888 fix.go:102] recreateIfNeeded on functional-400359: state=Running err=<nil>
	W1108 23:43:59.709665  213888 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 23:43:59.711868  213888 out.go:177] * Updating the running kvm2 "functional-400359" VM ...
	I1108 23:43:59.713307  213888 machine.go:88] provisioning docker machine ...
	I1108 23:43:59.713332  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.713637  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.713880  213888 buildroot.go:166] provisioning hostname "functional-400359"
	I1108 23:43:59.713899  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.714053  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:43:59.716647  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.717013  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:43:59.717073  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.717195  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:43:59.717406  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.717589  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.717824  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:43:59.718013  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:43:59.718360  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:43:59.718370  213888 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-400359 && echo "functional-400359" | sudo tee /etc/hostname
	I1108 23:43:59.863990  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-400359
	
	I1108 23:43:59.864012  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:43:59.866908  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.867252  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:43:59.867363  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.867442  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:43:59.867690  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.867850  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.867996  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:43:59.868145  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:43:59.868492  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:43:59.868503  213888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-400359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-400359/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-400359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 23:43:59.999382  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 23:43:59.999410  213888 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17586-201782/.minikube CaCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17586-201782/.minikube}
	I1108 23:43:59.999434  213888 buildroot.go:174] setting up certificates
	I1108 23:43:59.999445  213888 provision.go:83] configureAuth start
	I1108 23:43:59.999455  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.999781  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:00.002662  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.002978  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.003014  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.003248  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.005651  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.006085  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.006106  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.006287  213888 provision.go:138] copyHostCerts
	I1108 23:44:00.006374  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem, removing ...
	I1108 23:44:00.006389  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem
	I1108 23:44:00.006451  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem (1078 bytes)
	I1108 23:44:00.006581  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem, removing ...
	I1108 23:44:00.006587  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem
	I1108 23:44:00.006617  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem (1123 bytes)
	I1108 23:44:00.006719  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem, removing ...
	I1108 23:44:00.006724  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem
	I1108 23:44:00.006742  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem (1679 bytes)
	I1108 23:44:00.006784  213888 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem org=jenkins.functional-400359 san=[192.168.39.189 192.168.39.189 localhost 127.0.0.1 minikube functional-400359]
	I1108 23:44:00.203873  213888 provision.go:172] copyRemoteCerts
	I1108 23:44:00.203931  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 23:44:00.203956  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.206797  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.207094  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.207119  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.207305  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.207516  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.207692  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.207814  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.301445  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 23:44:00.331684  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 23:44:00.361187  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 23:44:00.388214  213888 provision.go:86] duration metric: configureAuth took 388.751766ms
	I1108 23:44:00.388241  213888 buildroot.go:189] setting minikube options for container-runtime
	I1108 23:44:00.388477  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:44:00.388484  213888 machine.go:91] provisioned docker machine in 675.168638ms
	I1108 23:44:00.388492  213888 start.go:300] post-start starting for "functional-400359" (driver="kvm2")
	I1108 23:44:00.388500  213888 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 23:44:00.388535  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.388924  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 23:44:00.388948  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.391561  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.391940  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.391967  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.392105  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.392316  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.392453  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.392611  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.488199  213888 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 23:44:00.492976  213888 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 23:44:00.492992  213888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-201782/.minikube/addons for local assets ...
	I1108 23:44:00.493051  213888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-201782/.minikube/files for local assets ...
	I1108 23:44:00.493113  213888 filesync.go:149] local asset: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem -> 2089632.pem in /etc/ssl/certs
	I1108 23:44:00.493174  213888 filesync.go:149] local asset: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/test/nested/copy/208963/hosts -> hosts in /etc/test/nested/copy/208963
	I1108 23:44:00.493206  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/208963
	I1108 23:44:00.501656  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem --> /etc/ssl/certs/2089632.pem (1708 bytes)
	I1108 23:44:00.525422  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/test/nested/copy/208963/hosts --> /etc/test/nested/copy/208963/hosts (40 bytes)
	I1108 23:44:00.548996  213888 start.go:303] post-start completed in 160.490436ms
	I1108 23:44:00.549028  213888 fix.go:56] fixHost completed within 858.891713ms
	I1108 23:44:00.549103  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.551962  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.552311  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.552329  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.552563  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.552735  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.552911  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.553036  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.553160  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:44:00.553504  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:44:00.553510  213888 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1108 23:44:00.679007  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699487040.675193612
	
	I1108 23:44:00.679025  213888 fix.go:206] guest clock: 1699487040.675193612
	I1108 23:44:00.679031  213888 fix.go:219] Guest: 2023-11-08 23:44:00.675193612 +0000 UTC Remote: 2023-11-08 23:44:00.549031363 +0000 UTC m=+1.003889169 (delta=126.162249ms)
	I1108 23:44:00.679051  213888 fix.go:190] guest clock delta is within tolerance: 126.162249ms
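The guest-clock step above runs a date command on the VM (rendered in the log as "date +%!s(MISSING).%!N(MISSING)" by Go's formatter, presumably "date +%s.%N"), parses the seconds.nanoseconds output, and accepts a small skew against the host clock. A rough, hypothetical sketch of that comparison in Go follows; the 1s tolerance and function names are assumptions for illustration, not minikube's actual constants.

    // Sketch: compare a guest "date +%s.%N" reading against a host timestamp.
    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's seconds.nanoseconds output and returns
    // guest time minus host time (approximate; float64 parsing loses sub-µs precision).
    func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOutput, 64)
    	if err != nil {
    		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	// Values taken from the log lines above (guest vs. remote host timestamp).
    	delta, err := clockDelta("1699487040.675193612", time.Unix(1699487040, 549031363))
    	if err != nil {
    		panic(err)
    	}
    	const tolerance = time.Second // assumed tolerance, for illustration only
    	if delta < tolerance && delta > -tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance; clock would need syncing\n", delta)
    	}
    }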
	I1108 23:44:00.679055  213888 start.go:83] releasing machines lock for "functional-400359", held for 988.934098ms
	I1108 23:44:00.679080  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.679402  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:00.682635  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.683021  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.683048  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.683271  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.683917  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.684098  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.684213  213888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 23:44:00.684252  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.684416  213888 ssh_runner.go:195] Run: cat /version.json
	I1108 23:44:00.684440  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.687054  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687399  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.687426  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687449  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687587  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.687788  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.687907  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.687935  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687948  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.688119  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.688118  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.688285  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.688448  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.688589  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.802586  213888 ssh_runner.go:195] Run: systemctl --version
	I1108 23:44:00.808787  213888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 23:44:00.814779  213888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 23:44:00.814850  213888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 23:44:00.824904  213888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 23:44:00.824923  213888 start.go:472] detecting cgroup driver to use...
	I1108 23:44:00.824994  213888 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1108 23:44:00.839653  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1108 23:44:00.852631  213888 docker.go:203] disabling cri-docker service (if available) ...
	I1108 23:44:00.852687  213888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 23:44:00.865664  213888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 23:44:00.878442  213888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 23:44:01.013896  213888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 23:44:01.176298  213888 docker.go:219] disabling docker service ...
	I1108 23:44:01.176368  213888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 23:44:01.191617  213888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 23:44:01.205423  213888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 23:44:01.352320  213888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 23:44:01.505796  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 23:44:01.520373  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 23:44:01.539920  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1108 23:44:01.552198  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1108 23:44:01.564553  213888 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1108 23:44:01.564634  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1108 23:44:01.577530  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:44:01.589460  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1108 23:44:01.601621  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:44:01.615054  213888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 23:44:01.626891  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
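The run above rewrites /etc/containerd/config.toml with a series of sed substitutions (sandbox image, SystemdCgroup, runtime type, conf_dir). As an illustration only, the SystemdCgroup rewrite can be expressed with Go's regexp package; the config fragment below is a hypothetical excerpt, not content captured from this run.

    // Sketch: the same substitution as
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // applied to an in-memory string with Go's regexp package.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Hypothetical fragment of /etc/containerd/config.toml.
    	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`

    	// Multiline match; capture the leading indentation and keep it.
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	updated := re.ReplaceAllString(config, "${1}SystemdCgroup = false")
    	fmt.Println(updated)
    }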
	I1108 23:44:01.638637  213888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 23:44:01.649235  213888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 23:44:01.660480  213888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 23:44:01.793850  213888 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1108 23:44:01.824923  213888 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1108 23:44:01.824991  213888 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1108 23:44:01.831130  213888 retry.go:31] will retry after 821.206397ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1108 23:44:02.653187  213888 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
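The "Will wait 60s for socket path" step above stats /run/containerd/containerd.sock after restarting containerd and retries when the socket is not yet present (the 821ms retry in the log). A minimal sketch of that polling pattern follows; the backoff schedule and function names here are assumptions for illustration, not minikube's actual retry policy.

    // Sketch: poll for a Unix socket path with a capped exponential backoff.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := 500 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket exists, containerd is up
    		}
    		time.Sleep(delay)
    		if delay < 5*time.Second {
    			delay *= 2 // back off, but never wait more than ~5s between checks
    		}
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("containerd socket is ready")
    }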
	I1108 23:44:02.660143  213888 start.go:540] Will wait 60s for crictl version
	I1108 23:44:02.660193  213888 ssh_runner.go:195] Run: which crictl
	I1108 23:44:02.665280  213888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 23:44:02.711632  213888 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.8
	RuntimeApiVersion:  v1
	I1108 23:44:02.711708  213888 ssh_runner.go:195] Run: containerd --version
	I1108 23:44:02.742401  213888 ssh_runner.go:195] Run: containerd --version
	I1108 23:44:02.772662  213888 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.7.8 ...
	I1108 23:44:02.774143  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:02.776902  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:02.777294  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:02.777321  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:02.777524  213888 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 23:44:02.784598  213888 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1108 23:44:02.786474  213888 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:44:02.786612  213888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:44:02.834765  213888 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:44:02.834781  213888 containerd.go:518] Images already preloaded, skipping extraction
	I1108 23:44:02.834839  213888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:44:02.877779  213888 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:44:02.877797  213888 cache_images.go:84] Images are preloaded, skipping loading
	I1108 23:44:02.877870  213888 ssh_runner.go:195] Run: sudo crictl info
	I1108 23:44:02.924597  213888 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1108 23:44:02.924626  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:44:02.924635  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:44:02.924644  213888 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 23:44:02.924661  213888 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8441 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-400359 NodeName:functional-400359 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 23:44:02.924813  213888 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-400359"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 23:44:02.924893  213888 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-400359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1108 23:44:02.924953  213888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 23:44:02.936489  213888 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 23:44:02.936562  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 23:44:02.947183  213888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I1108 23:44:02.966007  213888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 23:44:02.985587  213888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1962 bytes)
	I1108 23:44:03.005107  213888 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I1108 23:44:03.010099  213888 certs.go:56] Setting up /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359 for IP: 192.168.39.189
	I1108 23:44:03.010128  213888 certs.go:190] acquiring lock for shared ca certs: {Name:mk39cbc6402159d6a738802f6361f72eac5d34d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:03.010382  213888 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17586-201782/.minikube/ca.key
	I1108 23:44:03.010425  213888 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.key
	I1108 23:44:03.010497  213888 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.key
	I1108 23:44:03.010540  213888 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.key.3964182b
	I1108 23:44:03.010588  213888 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.key
	I1108 23:44:03.010739  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963.pem (1338 bytes)
	W1108 23:44:03.010780  213888 certs.go:433] ignoring /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963_empty.pem, impossibly tiny 0 bytes
	I1108 23:44:03.010790  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 23:44:03.010822  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem (1078 bytes)
	I1108 23:44:03.010853  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem (1123 bytes)
	I1108 23:44:03.010885  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem (1679 bytes)
	I1108 23:44:03.010944  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem (1708 bytes)
	I1108 23:44:03.011800  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 23:44:03.052476  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 23:44:03.084167  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 23:44:03.113455  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 23:44:03.138855  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 23:44:03.170000  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 23:44:03.203207  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 23:44:03.233030  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 23:44:03.262431  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963.pem --> /usr/share/ca-certificates/208963.pem (1338 bytes)
	I1108 23:44:03.288670  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem --> /usr/share/ca-certificates/2089632.pem (1708 bytes)
	I1108 23:44:03.317344  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 23:44:03.345150  213888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 23:44:03.367221  213888 ssh_runner.go:195] Run: openssl version
	I1108 23:44:03.373631  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2089632.pem && ln -fs /usr/share/ca-certificates/2089632.pem /etc/ssl/certs/2089632.pem"
	I1108 23:44:03.388662  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.394338  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  8 23:42 /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.394401  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.400580  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2089632.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 23:44:03.412248  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 23:44:03.425515  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.430926  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  8 23:35 /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.430990  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.437443  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 23:44:03.447837  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208963.pem && ln -fs /usr/share/ca-certificates/208963.pem /etc/ssl/certs/208963.pem"
	I1108 23:44:03.461453  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.467398  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  8 23:42 /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.467478  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.474228  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/208963.pem /etc/ssl/certs/51391683.0"
	I1108 23:44:03.487446  213888 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 23:44:03.492652  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 23:44:03.499552  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 23:44:03.507193  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 23:44:03.514236  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 23:44:03.521522  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 23:44:03.527708  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
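The openssl runs above use -checkend 86400, which reports whether each certificate expires within the next 24 hours. An equivalent check can be written directly against the PEM file, as in the hedged sketch below; the path and function name are illustrative, not taken from minikube's code.

    // Sketch: load a PEM certificate and test whether it expires within a window,
    // mirroring `openssl x509 -noout -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True if "now + window" is past NotAfter, i.e. the cert expires inside the window.
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }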
	I1108 23:44:03.534082  213888 kubeadm.go:404] StartCluster: {Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:44:03.534196  213888 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1108 23:44:03.534267  213888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:44:03.584679  213888 cri.go:89] found id: "db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f"
	I1108 23:44:03.584695  213888 cri.go:89] found id: "e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5"
	I1108 23:44:03.584698  213888 cri.go:89] found id: "998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1"
	I1108 23:44:03.584701  213888 cri.go:89] found id: "daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9"
	I1108 23:44:03.584704  213888 cri.go:89] found id: "b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2"
	I1108 23:44:03.584707  213888 cri.go:89] found id: "46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573"
	I1108 23:44:03.584709  213888 cri.go:89] found id: "a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086"
	I1108 23:44:03.584711  213888 cri.go:89] found id: ""
	I1108 23:44:03.584767  213888 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1108 23:44:03.616378  213888 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","pid":1604,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9/rootfs","created":"2023-11-08T23:43:40.318157335Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wv6f7_7ab3ac5b-5a0e-462b-a171-08f507184dfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wv6f7","io.kubernetes.cri.sand
box-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7ab3ac5b-5a0e-462b-a171-08f507184dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","pid":1110,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1/rootfs","created":"2023-11-08T23:43:18.68773069Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-400359_faaa6dec7d9cbf75400a4930b93bdc7d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes
.cri.sandbox-name":"etcd-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"faaa6dec7d9cbf75400a4930b93bdc7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573","pid":1243,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573/rootfs","created":"2023-11-08T23:43:19.79473196Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","io.kubernetes.cri.sandbox-name":"etcd-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fa
aa6dec7d9cbf75400a4930b93bdc7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","pid":1137,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0/rootfs","created":"2023-11-08T23:43:18.759582Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-400359","io.ku
bernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"782fbbe1f7d627cd92711fb14a0b0813"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","pid":1799,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44/rootfs","created":"2023-11-08T23:43:41.584597939Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-tqvtr_b03be54f-57e6-4247-84ba-9545f9b1b4ed","io.kubernetes.cri.sandbox-memory
":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-tqvtr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b03be54f-57e6-4247-84ba-9545f9b1b4ed"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1/rootfs","created":"2023-11-08T23:43:40.529772065Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.3","io.kubernetes.cri.sandbox-id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","io.kubernetes.cri.sandbox-name":"kube-proxy-wv6f7","io.kubernetes.cri.sandbox-namespace":"kube-sy
stem","io.kubernetes.cri.sandbox-uid":"7ab3ac5b-5a0e-462b-a171-08f507184dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","pid":1160,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b/rootfs","created":"2023-11-08T23:43:18.813882118Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-400359_af28ec4ee73fcf841ab21630a0a61078","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox
-name":"kube-scheduler-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"af28ec4ee73fcf841ab21630a0a61078"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","pid":1838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186/rootfs","created":"2023-11-08T23:43:41.837718349Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_01aed977-1439-433c-b8b1-869c92
fcd9e2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"01aed977-1439-433c-b8b1-869c92fcd9e2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086","pid":1198,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086/rootfs","created":"2023-11-08T23:43:19.509573182Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.3","io.kubernetes.cri.sandbox-id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-40
0359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"782fbbe1f7d627cd92711fb14a0b0813"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2","pid":1272,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2/rootfs","created":"2023-11-08T23:43:19.928879069Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.3","io.kubernetes.cri.sandbox-id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.
cri.sandbox-uid":"926dd51d8b9a510a42b3d2d730469c12"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","pid":1169,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef/rootfs","created":"2023-11-08T23:43:18.854841205Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-400359_926dd51d8b9a510a42b3d2d730469c12","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-con
troller-manager-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"926dd51d8b9a510a42b3d2d730469c12"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9","pid":1308,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9/rootfs","created":"2023-11-08T23:43:20.119265886Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.3","io.kubernetes.cri.sandbox-id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernete
s.cri.sandbox-uid":"af28ec4ee73fcf841ab21630a0a61078"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f","pid":1923,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f/rootfs","created":"2023-11-08T23:43:43.423326377Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"01aed977-1439-433c-b8b1-869c92fcd9e2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id
":"e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5","pid":1870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5/rootfs","created":"2023-11-08T23:43:42.0245694Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-tqvtr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b03be54f-57e6-4247-84ba-9545f9b1b4ed"},"owner":"root"}]
	I1108 23:44:03.616807  213888 cri.go:126] list returned 14 containers
	I1108 23:44:03.616824  213888 cri.go:129] container: {ID:0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9 Status:running}
	I1108 23:44:03.616850  213888 cri.go:131] skipping 0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9 - not in ps
	I1108 23:44:03.616857  213888 cri.go:129] container: {ID:127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1 Status:running}
	I1108 23:44:03.616865  213888 cri.go:131] skipping 127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1 - not in ps
	I1108 23:44:03.616871  213888 cri.go:129] container: {ID:46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 Status:running}
	I1108 23:44:03.616879  213888 cri.go:135] skipping {46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 running}: state = "running", want "paused"
	I1108 23:44:03.616892  213888 cri.go:129] container: {ID:523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 Status:running}
	I1108 23:44:03.616900  213888 cri.go:131] skipping 523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 - not in ps
	I1108 23:44:03.616906  213888 cri.go:129] container: {ID:8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44 Status:running}
	I1108 23:44:03.616913  213888 cri.go:131] skipping 8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44 - not in ps
	I1108 23:44:03.616919  213888 cri.go:129] container: {ID:998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 Status:running}
	I1108 23:44:03.616927  213888 cri.go:135] skipping {998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 running}: state = "running", want "paused"
	I1108 23:44:03.616934  213888 cri.go:129] container: {ID:9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b Status:running}
	I1108 23:44:03.616941  213888 cri.go:131] skipping 9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b - not in ps
	I1108 23:44:03.616947  213888 cri.go:129] container: {ID:9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186 Status:running}
	I1108 23:44:03.616954  213888 cri.go:131] skipping 9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186 - not in ps
	I1108 23:44:03.616959  213888 cri.go:129] container: {ID:a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086 Status:running}
	I1108 23:44:03.616963  213888 cri.go:135] skipping {a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086 running}: state = "running", want "paused"
	I1108 23:44:03.616967  213888 cri.go:129] container: {ID:b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 Status:running}
	I1108 23:44:03.616973  213888 cri.go:135] skipping {b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 running}: state = "running", want "paused"
	I1108 23:44:03.616980  213888 cri.go:129] container: {ID:ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef Status:running}
	I1108 23:44:03.616988  213888 cri.go:131] skipping ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef - not in ps
	I1108 23:44:03.616993  213888 cri.go:129] container: {ID:daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 Status:running}
	I1108 23:44:03.617001  213888 cri.go:135] skipping {daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 running}: state = "running", want "paused"
	I1108 23:44:03.617019  213888 cri.go:129] container: {ID:db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f Status:running}
	I1108 23:44:03.617027  213888 cri.go:135] skipping {db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f running}: state = "running", want "paused"
	I1108 23:44:03.617034  213888 cri.go:129] container: {ID:e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 Status:running}
	I1108 23:44:03.617041  213888 cri.go:135] skipping {e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 running}: state = "running", want "paused"
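Editor's note: the cri.go lines above show minikube filtering the 14 containers the runtime returned. Anything whose ID is not in the `crictl ps` output is skipped as "not in ps", and the survivors are kept only if their state matches the wanted one (here "paused", so every "running" container is skipped). Below is a minimal sketch of that filtering with hypothetical IDs; it assumes nothing about minikube's real data structures and is not its cri.go code.

package main

import "fmt"

type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers that appear in the ps output AND whose
// state matches the wanted state, mirroring the skip messages in the log.
func filterByState(all []container, inPS map[string]bool, want string) []string {
	var keep []string
	for _, c := range all {
		if !inPS[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		if c.Status != want {
			fmt.Printf("skipping %v: state = %q, want %q\n", c, c.Status, want)
			continue
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	all := []container{{ID: "aaa", Status: "running"}, {ID: "bbb", Status: "paused"}}
	inPS := map[string]bool{"bbb": true}
	fmt.Println(filterByState(all, inPS, "paused")) // -> [bbb]
}

In the run above the wanted state is "paused" and everything is "running", so every container is skipped and nothing needs to be unpaused before the restart.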
	I1108 23:44:03.617112  213888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 23:44:03.629140  213888 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 23:44:03.629156  213888 kubeadm.go:636] restartCluster start
	I1108 23:44:03.629300  213888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 23:44:03.640035  213888 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:03.640634  213888 kubeconfig.go:92] found "functional-400359" server: "https://192.168.39.189:8441"
	I1108 23:44:03.641989  213888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 23:44:03.652731  213888 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
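Editor's note: the "needs reconfigure" decision above is driven by the `sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` command a few lines earlier; the only change is the admission-plugins list injected by `--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision`. A minimal sketch of that check, assuming only the behaviour visible in the log (diff exits 0 when the files match, 1 when they differ); this is not minikube's actual kubeadm.go.

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfigure diffs the current and freshly generated kubeadm configs and
// treats diff's exit code 1 (files differ) as "cluster needs reconfigure".
func needsReconfigure(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: identical, nothing to do
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, string(out), nil // exit 1: configs differ
	}
	return false, "", err // exit >1: diff itself failed
}

func main() {
	differ, patch, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if differ {
		fmt.Println("needs reconfigure: configs differ:\n" + patch)
	}
}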
	I1108 23:44:03.652746  213888 kubeadm.go:1128] stopping kube-system containers ...
	I1108 23:44:03.652762  213888 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1108 23:44:03.652812  213888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:44:03.699235  213888 cri.go:89] found id: "db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f"
	I1108 23:44:03.699249  213888 cri.go:89] found id: "e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5"
	I1108 23:44:03.699251  213888 cri.go:89] found id: "998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1"
	I1108 23:44:03.699255  213888 cri.go:89] found id: "daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9"
	I1108 23:44:03.699260  213888 cri.go:89] found id: "b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2"
	I1108 23:44:03.699263  213888 cri.go:89] found id: "46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573"
	I1108 23:44:03.699265  213888 cri.go:89] found id: "a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086"
	I1108 23:44:03.699268  213888 cri.go:89] found id: ""
	I1108 23:44:03.699272  213888 cri.go:234] Stopping containers: [db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086]
	I1108 23:44:03.699323  213888 ssh_runner.go:195] Run: which crictl
	I1108 23:44:03.703856  213888 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086
	I1108 23:44:19.459008  213888 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086: (15.75506263s)
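Editor's note: the 15.75s above is a single `sudo crictl stop --timeout=10` covering all seven kube-system containers. A minimal sketch of issuing that same invocation from Go follows; the container IDs in main are placeholders, and this is an illustration rather than minikube's ssh_runner path (which runs the command inside the VM over SSH).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// stopContainers runs `sudo crictl stop --timeout=10 <id...>`, the exact
// invocation logged above, and reports how long the bulk stop took.
func stopContainers(ids []string) error {
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	start := time.Now()
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("crictl stop failed: %v\n%s", err, out)
	}
	fmt.Printf("stopped %d containers in %s\n", len(ids), time.Since(start))
	return nil
}

func main() {
	// Placeholder IDs; in the run above these are the seven kube-system containers.
	if err := stopContainers([]string{"db750d3b7aa6", "e50243045348"}); err != nil {
		fmt.Println(err)
	}
}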
	I1108 23:44:19.459080  213888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 23:44:19.504154  213888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 23:44:19.515266  213888 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  8 23:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Nov  8 23:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  8 23:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Nov  8 23:43 /etc/kubernetes/scheduler.conf
	
	I1108 23:44:19.515346  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1108 23:44:19.524771  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1108 23:44:19.534582  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1108 23:44:19.544348  213888 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:19.544402  213888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 23:44:19.553487  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1108 23:44:19.562898  213888 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:19.562943  213888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 23:44:19.572855  213888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 23:44:19.583092  213888 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 23:44:19.583112  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:19.656656  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:20.718251  213888 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.061543708s)
	I1108 23:44:20.718274  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:20.940824  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:21.049550  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:21.155180  213888 api_server.go:52] waiting for apiserver process to appear ...
	I1108 23:44:21.155262  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:21.170827  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:21.687533  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:22.187100  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:22.201909  213888 api_server.go:72] duration metric: took 1.046727455s to wait for apiserver process to appear ...
	I1108 23:44:22.201930  213888 api_server.go:88] waiting for apiserver healthz status ...
	I1108 23:44:22.201951  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:22.202592  213888 api_server.go:269] stopped: https://192.168.39.189:8441/healthz: Get "https://192.168.39.189:8441/healthz": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:22.202621  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:22.203025  213888 api_server.go:269] stopped: https://192.168.39.189:8441/healthz: Get "https://192.168.39.189:8441/healthz": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:22.703898  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.321821  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 23:44:24.321848  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 23:44:24.321866  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.331452  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 23:44:24.331472  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 23:44:24.703560  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.710858  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 23:44:24.710888  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 23:44:25.203966  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:25.210943  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 23:44:25.210976  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 23:44:25.703512  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:25.709194  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 200:
	ok
	I1108 23:44:25.717645  213888 api_server.go:141] control plane version: v1.28.3
	I1108 23:44:25.717670  213888 api_server.go:131] duration metric: took 3.515732599s to wait for apiserver health ...
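Editor's note: the 3.5s wait above is a poll of https://192.168.39.189:8441/healthz; the 403 ("system:anonymous") and 500 (rbac/bootstrap-roles, bootstrap-system-priority-classes failing) responses are expected while the apiserver is still bootstrapping, and the loop only stops on a plain 200 "ok". A minimal sketch of such a poll, assuming an anonymous client with certificate verification disabled; this is an illustration, not minikube's api_server.go.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns 200 or
// the timeout expires, printing non-200 responses along the way.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.189:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}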
	I1108 23:44:25.717682  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:44:25.717690  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:44:25.719887  213888 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 23:44:25.721531  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 23:44:25.734492  213888 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
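Editor's note: the scp above writes minikube's bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The exact 457-byte file is not reproduced in the log, so the conflist below is a generic bridge+portmap example of the same shape, written the same way, purely as an illustration; the subnet and plugin options are assumptions, not minikube's actual contents.

package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI conflist; NOT the literal file minikube installs.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Println(err)
	}
}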
	I1108 23:44:25.771439  213888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 23:44:25.784433  213888 system_pods.go:59] 7 kube-system pods found
	I1108 23:44:25.784465  213888 system_pods.go:61] "coredns-5dd5756b68-tqvtr" [b03be54f-57e6-4247-84ba-9545f9b1b4ed] Running
	I1108 23:44:25.784475  213888 system_pods.go:61] "etcd-functional-400359" [70bdf2a8-b999-4d46-baf3-0c9267d9d3ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 23:44:25.784489  213888 system_pods.go:61] "kube-apiserver-functional-400359" [9b2db385-150c-4599-b59e-165208edd076] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 23:44:25.784498  213888 system_pods.go:61] "kube-controller-manager-functional-400359" [e2f2bb0b-f018-4ada-bd5d-d225b097763b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 23:44:25.784504  213888 system_pods.go:61] "kube-proxy-wv6f7" [7ab3ac5b-5a0e-462b-a171-08f507184dfa] Running
	I1108 23:44:25.784511  213888 system_pods.go:61] "kube-scheduler-functional-400359" [0156fad8-02e5-40ae-a5d1-17824d5c238b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 23:44:25.784521  213888 system_pods.go:61] "storage-provisioner" [01aed977-1439-433c-b8b1-869c92fcd9e2] Running
	I1108 23:44:25.784531  213888 system_pods.go:74] duration metric: took 13.073006ms to wait for pod list to return data ...
	I1108 23:44:25.784539  213888 node_conditions.go:102] verifying NodePressure condition ...
	I1108 23:44:25.793569  213888 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 23:44:25.793597  213888 node_conditions.go:123] node cpu capacity is 2
	I1108 23:44:25.793611  213888 node_conditions.go:105] duration metric: took 9.06541ms to run NodePressure ...
	I1108 23:44:25.793633  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:26.114141  213888 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 23:44:26.120712  213888 kubeadm.go:787] kubelet initialised
	I1108 23:44:26.120723  213888 kubeadm.go:788] duration metric: took 6.565858ms waiting for restarted kubelet to initialise ...
	I1108 23:44:26.120731  213888 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 23:44:26.131331  213888 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:26.138144  213888 pod_ready.go:92] pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:26.138155  213888 pod_ready.go:81] duration metric: took 6.806304ms waiting for pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:26.138164  213888 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:28.164811  213888 pod_ready.go:102] pod "etcd-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:30.665514  213888 pod_ready.go:92] pod "etcd-functional-400359" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:30.665553  213888 pod_ready.go:81] duration metric: took 4.527359591s waiting for pod "etcd-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:30.665565  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:32.689403  213888 pod_ready.go:102] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:34.690254  213888 pod_ready.go:102] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:35.686775  213888 pod_ready.go:92] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:35.686791  213888 pod_ready.go:81] duration metric: took 5.021218707s waiting for pod "kube-apiserver-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:35.686800  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:37.708359  213888 pod_ready.go:102] pod "kube-controller-manager-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:40.208162  213888 pod_ready.go:102] pod "kube-controller-manager-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:41.201149  213888 pod_ready.go:97] error getting pod "kube-controller-manager-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201165  213888 pod_ready.go:81] duration metric: took 5.514358749s waiting for pod "kube-controller-manager-functional-400359" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.201176  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201204  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wv6f7" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:41.201819  213888 pod_ready.go:97] error getting pod "kube-proxy-wv6f7" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wv6f7": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201831  213888 pod_ready.go:81] duration metric: took 621.035µs waiting for pod "kube-proxy-wv6f7" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.201841  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-wv6f7" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wv6f7": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201857  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:41.202340  213888 pod_ready.go:97] error getting pod "kube-scheduler-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.202352  213888 pod_ready.go:81] duration metric: took 489.317µs waiting for pod "kube-scheduler-functional-400359" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.202362  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.202373  213888 pod_ready.go:38] duration metric: took 15.08163132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
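Editor's note: the pod_ready.go block above waits for each system-critical pod's Ready condition; the run fails partway through because the apiserver on 192.168.39.189:8441 starts refusing connections while kube-controller-manager is still being checked. A minimal sketch of that per-pod check using client-go follows; the kubeconfig path is a placeholder and this is not minikube's pod_ready.go. The later node_ready.go wait is the analogous check against the node's NodeReady condition.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls a single pod until it is Ready or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-functional-400359", 4*time.Minute))
}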
	I1108 23:44:41.202390  213888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 23:44:41.213978  213888 ops.go:34] apiserver oom_adj: -16
	I1108 23:44:41.213994  213888 kubeadm.go:640] restartCluster took 37.584832416s
	I1108 23:44:41.214002  213888 kubeadm.go:406] StartCluster complete in 37.679936432s
	I1108 23:44:41.214034  213888 settings.go:142] acquiring lock: {Name:mkb2acb83ccee48e6a009b8a47bf5424e6c38acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:41.214142  213888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:44:41.215036  213888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-201782/kubeconfig: {Name:mk9c6e9f67ac12aac98932c0b45c3a0608805854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:41.215314  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 23:44:41.215404  213888 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 23:44:41.215479  213888 addons.go:69] Setting storage-provisioner=true in profile "functional-400359"
	I1108 23:44:41.215505  213888 addons.go:69] Setting default-storageclass=true in profile "functional-400359"
	I1108 23:44:41.215525  213888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-400359"
	I1108 23:44:41.215526  213888 addons.go:231] Setting addon storage-provisioner=true in "functional-400359"
	W1108 23:44:41.215533  213888 addons.go:240] addon storage-provisioner should already be in state true
	I1108 23:44:41.215537  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:44:41.215605  213888 host.go:66] Checking if "functional-400359" exists ...
	I1108 23:44:41.215913  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.215951  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.216018  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.216055  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1108 23:44:41.216959  213888 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-400359" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.39.189:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:41.216977  213888 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.39.189:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.217012  213888 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1108 23:44:41.220368  213888 out.go:177] * Verifying Kubernetes components...
	I1108 23:44:41.222004  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:44:41.231875  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I1108 23:44:41.232530  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.233190  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.233218  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.233719  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.234280  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.234325  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.237697  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1108 23:44:41.238255  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.238752  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.238768  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.239192  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.239445  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.244598  213888 addons.go:231] Setting addon default-storageclass=true in "functional-400359"
	W1108 23:44:41.244614  213888 addons.go:240] addon default-storageclass should already be in state true
	I1108 23:44:41.244642  213888 host.go:66] Checking if "functional-400359" exists ...
	I1108 23:44:41.245132  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.245164  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.252037  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I1108 23:44:41.252498  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.253020  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.253051  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.253456  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.253670  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.255485  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:41.257960  213888 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:44:41.259863  213888 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 23:44:41.259875  213888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 23:44:41.259896  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:41.261665  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I1108 23:44:41.262263  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.262840  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.262867  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.263263  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.263662  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.263878  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.263916  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.264121  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:41.264156  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.264394  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:41.264629  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:41.264831  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:41.265036  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:41.280509  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I1108 23:44:41.281054  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.281632  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.281643  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.282046  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.282278  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.284072  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:41.284406  213888 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 23:44:41.284420  213888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 23:44:41.284442  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:41.287607  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.288057  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:41.288091  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.288286  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:41.288503  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:41.288686  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:41.288836  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:41.340989  213888 node_ready.go:35] waiting up to 6m0s for node "functional-400359" to be "Ready" ...
	E1108 23:44:41.341045  213888 start.go:891] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1108 23:44:41.341073  213888 start.go:294] Unable to inject {"host.minikube.internal": 192.168.39.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1108 23:44:41.341104  213888 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I1108 23:44:41.341639  213888 node_ready.go:53] error getting node "functional-400359": Get "https://192.168.39.189:8441/api/v1/nodes/functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.341651  213888 node_ready.go:38] duration metric: took 637.211µs waiting for node "functional-400359" to be "Ready" ...
	I1108 23:44:41.344408  213888 out.go:177] 
	W1108 23:44:41.345988  213888 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-400359": Get "https://192.168.39.189:8441/api/v1/nodes/functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:41.346006  213888 out.go:239] * 
	W1108 23:44:41.346885  213888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 23:44:41.349263  213888 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	824ed4a510711       6e38f40d628db       6 seconds ago        Exited              storage-provisioner       3                   9c7477be15957       storage-provisioner
	7921f51c4026f       10baa1ca17068       50 seconds ago       Running             kube-controller-manager   2                   ca712d9c0441a       kube-controller-manager-functional-400359
	bff1a67a2e4bc       5374347291230       52 seconds ago       Created             kube-apiserver            1                   523d23a3366a5       kube-apiserver-functional-400359
	88c140ed6030d       ead0a4a53df89       About a minute ago   Running             coredns                   1                   8005a17990fd0       coredns-5dd5756b68-tqvtr
	fb3df666c8263       bfc896cf80fba       About a minute ago   Running             kube-proxy                1                   0d0883976452b       kube-proxy-wv6f7
	1d784d6322fa7       73deb9a3f7025       About a minute ago   Running             etcd                      1                   1274367410852       etcd-functional-400359
	2faf0584a90c9       10baa1ca17068       About a minute ago   Exited              kube-controller-manager   1                   ca712d9c0441a       kube-controller-manager-functional-400359
	a06cdad021ec7       6d1b4fd1b182d       About a minute ago   Running             kube-scheduler            1                   9bb1405590c60       kube-scheduler-functional-400359
	e502430453488       ead0a4a53df89       About a minute ago   Exited              coredns                   0                   8005a17990fd0       coredns-5dd5756b68-tqvtr
	998ca340aa83f       bfc896cf80fba       About a minute ago   Exited              kube-proxy                0                   0d0883976452b       kube-proxy-wv6f7
	daf40bd6e2a8e       6d1b4fd1b182d       About a minute ago   Exited              kube-scheduler            0                   9bb1405590c60       kube-scheduler-functional-400359
	46b02dbdf3f22       73deb9a3f7025       About a minute ago   Exited              etcd                      0                   1274367410852       etcd-functional-400359
	
	* 
	* ==> containerd <==
	* -- Journal begins at Wed 2023-11-08 23:42:35 UTC, ends at Wed 2023-11-08 23:45:11 UTC. --
	Nov 08 23:45:05 functional-400359 containerd[2683]: time="2023-11-08T23:45:05.077114372Z" level=info msg="CreateContainer within sandbox \"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:3,}"
	Nov 08 23:45:05 functional-400359 containerd[2683]: time="2023-11-08T23:45:05.112896758Z" level=info msg="CreateContainer within sandbox \"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186\" for &ContainerMetadata{Name:storage-provisioner,Attempt:3,} returns container id \"824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d\""
	Nov 08 23:45:05 functional-400359 containerd[2683]: time="2023-11-08T23:45:05.114162561Z" level=info msg="StartContainer for \"824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d\""
	Nov 08 23:45:05 functional-400359 containerd[2683]: time="2023-11-08T23:45:05.216241019Z" level=info msg="StartContainer for \"824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d\" returns successfully"
	Nov 08 23:45:05 functional-400359 containerd[2683]: time="2023-11-08T23:45:05.259232197Z" level=info msg="shim disconnected" id=824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d namespace=k8s.io
	Nov 08 23:45:05 functional-400359 containerd[2683]: time="2023-11-08T23:45:05.259563774Z" level=warning msg="cleaning up after shim disconnected" id=824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d namespace=k8s.io
	Nov 08 23:45:05 functional-400359 containerd[2683]: time="2023-11-08T23:45:05.259629170Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 08 23:45:05 functional-400359 containerd[2683]: time="2023-11-08T23:45:05.354846557Z" level=info msg="RemoveContainer for \"76666ef4714482e565dfebdae2cfc50cdff1ac24e59143795efb2b5476b80602\""
	Nov 08 23:45:05 functional-400359 containerd[2683]: time="2023-11-08T23:45:05.366188180Z" level=info msg="RemoveContainer for \"76666ef4714482e565dfebdae2cfc50cdff1ac24e59143795efb2b5476b80602\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.046988694Z" level=info msg="Kill container \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.122784712Z" level=info msg="shim disconnected" id=dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.122833279Z" level=warning msg="cleaning up after shim disconnected" id=dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.122842119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.153972486Z" level=info msg="StopContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.156265546Z" level=info msg="StopPodSandbox for \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.156411974Z" level=info msg="Container to stop \"bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0\" must be in running or unknown state, current state \"CONTAINER_CREATED\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.156598790Z" level=info msg="Container to stop \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.208231957Z" level=info msg="shim disconnected" id=523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.208338377Z" level=warning msg="cleaning up after shim disconnected" id=523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.208351190Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.230602871Z" level=info msg="TearDown network for sandbox \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\" successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.230747079Z" level=info msg="StopPodSandbox for \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.373793669Z" level=info msg="RemoveContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.381845817Z" level=info msg="RemoveContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.382697295Z" level=error msg="ContainerStatus for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\": not found"
	
	* 
	* ==> coredns [88c140ed6030d22284aaafb49382d15ef7da52d8beb9e058c36ea698c2910d04] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57342 - 44358 "HINFO IN 4361793349757605016.248109365602167116. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.135909373s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: unknown (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: unknown (get namespaces)
	
	* 
	* ==> coredns [e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51534 - 35900 "HINFO IN 2585345581505525764.4555830120890176857. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031001187s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.156846] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.062315] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.304325] systemd-fstab-generator[561]: Ignoring "noauto" for root device
	[  +0.112180] systemd-fstab-generator[572]: Ignoring "noauto" for root device
	[  +0.151842] systemd-fstab-generator[585]: Ignoring "noauto" for root device
	[  +0.124353] systemd-fstab-generator[596]: Ignoring "noauto" for root device
	[  +0.268439] systemd-fstab-generator[623]: Ignoring "noauto" for root device
	[  +6.156386] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[Nov 8 23:43] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +9.282190] systemd-fstab-generator[1362]: Ignoring "noauto" for root device
	[ +18.264010] systemd-fstab-generator[2015]: Ignoring "noauto" for root device
	[  +0.177052] systemd-fstab-generator[2026]: Ignoring "noauto" for root device
	[  +0.171180] systemd-fstab-generator[2039]: Ignoring "noauto" for root device
	[  +0.169893] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
	[  +0.296549] systemd-fstab-generator[2076]: Ignoring "noauto" for root device
	[Nov 8 23:44] systemd-fstab-generator[2615]: Ignoring "noauto" for root device
	[  +0.147087] systemd-fstab-generator[2626]: Ignoring "noauto" for root device
	[  +0.171247] systemd-fstab-generator[2639]: Ignoring "noauto" for root device
	[  +0.165487] systemd-fstab-generator[2650]: Ignoring "noauto" for root device
	[  +0.295897] systemd-fstab-generator[2676]: Ignoring "noauto" for root device
	[ +19.128891] systemd-fstab-generator[3485]: Ignoring "noauto" for root device
	[ +15.032820] kauditd_printk_skb: 23 callbacks suppressed
	
	* 
	* ==> etcd [1d784d6322fa72bf1ea8c9873171f75a644fcdac3d60a60b7253cea2aad58484] <==
	* {"level":"info","ts":"2023-11-08T23:44:10.907861Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T23:44:10.907973Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-11-08T23:44:10.908286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a switched to configuration voters=(8048648980531676538)"}
	{"level":"info","ts":"2023-11-08T23:44:10.908344Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","added-peer-id":"6fb28b9aae66857a","added-peer-peer-urls":["https://192.168.39.189:2380"]}
	{"level":"info","ts":"2023-11-08T23:44:10.908546Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:44:10.908577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:44:10.919242Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:10.919299Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:10.919177Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-08T23:44:10.920701Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-08T23:44:10.920863Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6fb28b9aae66857a","initial-advertise-peer-urls":["https://192.168.39.189:2380"],"listen-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.189:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-08T23:44:12.571328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgPreVoteResp from 6fb28b9aae66857a at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became candidate at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.571747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgVoteResp from 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.571885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became leader at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.572003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6fb28b9aae66857a elected leader 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.574123Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6fb28b9aae66857a","local-member-attributes":"{Name:functional-400359 ClientURLs:[https://192.168.39.189:2379]}","request-path":"/0/members/6fb28b9aae66857a/attributes","cluster-id":"f0bdb053fd9e03ec","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T23:44:12.574193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:44:12.575568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T23:44:12.575581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T23:44:12.57599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T23:44:12.576127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:44:12.580777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.189:2379"}
	
	* 
	* ==> etcd [46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573] <==
	* {"level":"info","ts":"2023-11-08T23:43:21.2639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:43:21.265203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.189:2379"}
	{"level":"info","ts":"2023-11-08T23:43:21.264037Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.264104Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:43:21.268038Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T23:43:21.273657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T23:43:21.27674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T23:43:21.306896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.332049Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.332311Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:43.658034Z","caller":"traceutil/trace.go:171","msg":"trace[1655151050] linearizableReadLoop","detail":"{readStateIndex:436; appliedIndex:435; }","duration":"158.288056ms","start":"2023-11-08T23:43:43.499691Z","end":"2023-11-08T23:43:43.657979Z","steps":["trace[1655151050] 'read index received'  (duration: 158.050466ms)","trace[1655151050] 'applied index is now lower than readState.Index'  (duration: 237.256µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-08T23:43:43.658216Z","caller":"traceutil/trace.go:171","msg":"trace[1004018470] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"165.867105ms","start":"2023-11-08T23:43:43.492343Z","end":"2023-11-08T23:43:43.65821Z","steps":["trace[1004018470] 'process raft request'  (duration: 165.460392ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T23:43:43.659133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.382515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2023-11-08T23:43:43.659215Z","caller":"traceutil/trace.go:171","msg":"trace[1204654578] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:419; }","duration":"159.531169ms","start":"2023-11-08T23:43:43.499663Z","end":"2023-11-08T23:43:43.659194Z","steps":["trace[1204654578] 'agreement among raft nodes before linearized reading'  (duration: 158.722284ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T23:43:49.836017Z","caller":"traceutil/trace.go:171","msg":"trace[1640228342] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"142.995238ms","start":"2023-11-08T23:43:49.693Z","end":"2023-11-08T23:43:49.835995Z","steps":["trace[1640228342] 'process raft request'  (duration: 142.737466ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T23:44:09.257705Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-08T23:44:09.257894Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-400359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	{"level":"warn","ts":"2023-11-08T23:44:09.258128Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.258264Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.273807Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.274055Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-08T23:44:09.274266Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6fb28b9aae66857a","current-leader-member-id":"6fb28b9aae66857a"}
	{"level":"info","ts":"2023-11-08T23:44:09.277371Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:09.277689Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:09.277704Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-400359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	
	* 
	* ==> kernel <==
	*  23:45:12 up 2 min,  0 users,  load average: 1.16, 0.72, 0.29
	Linux functional-400359 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0] <==
	* 
	* ==> kube-controller-manager [2faf0584a90c98fa3ae503339949f6fdc901e881c318c3b0b4ca3323123ba1a0] <==
	* I1108 23:44:10.838065       1 serving.go:348] Generated self-signed cert in-memory
	I1108 23:44:11.452649       1 controllermanager.go:189] "Starting" version="v1.28.3"
	I1108 23:44:11.452696       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 23:44:11.454751       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1108 23:44:11.455029       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 23:44:11.455309       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 23:44:11.455704       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 23:44:11.475414       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I1108 23:44:11.576258       1 shared_informer.go:318] Caches are synced for tokens
	I1108 23:44:12.801347       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1108 23:44:12.802296       1 cleaner.go:83] "Starting CSR cleaner controller"
	I1108 23:44:12.899559       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I1108 23:44:12.899798       1 namespace_controller.go:197] "Starting namespace controller"
	I1108 23:44:12.900091       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I1108 23:44:12.926665       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I1108 23:44:12.927319       1 stateful_set.go:161] "Starting stateful set controller"
	I1108 23:44:12.927524       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I1108 23:44:12.935324       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1108 23:44:12.935710       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I1108 23:44:12.936165       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	F1108 23:44:12.956649       1 client_builder_dynamic.go:174] Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.189:8441: connect: connection refused
	
	* 
	* ==> kube-controller-manager [7921f51c4026fd4eadeac9dbccfa803fc415bc1ed99e900bd95f598a614d8315] <==
	* I1108 23:44:36.950272       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-400359\" does not exist"
	I1108 23:44:36.959129       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 23:44:36.963962       1 shared_informer.go:318] Caches are synced for GC
	I1108 23:44:36.986020       1 shared_informer.go:318] Caches are synced for daemon sets
	I1108 23:44:36.993585       1 shared_informer.go:318] Caches are synced for node
	I1108 23:44:36.993740       1 range_allocator.go:174] "Sending events to api server"
	I1108 23:44:36.993795       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1108 23:44:36.993812       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1108 23:44:36.993931       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1108 23:44:36.998585       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1108 23:44:37.008283       1 shared_informer.go:318] Caches are synced for attach detach
	I1108 23:44:37.022208       1 shared_informer.go:318] Caches are synced for taint
	I1108 23:44:37.022353       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1108 23:44:37.022927       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-400359"
	I1108 23:44:37.023049       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1108 23:44:37.023069       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1108 23:44:37.023085       1 taint_manager.go:211] "Sending events to api server"
	I1108 23:44:37.024141       1 event.go:307] "Event occurred" object="functional-400359" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-400359 event: Registered Node functional-400359 in Controller"
	I1108 23:44:37.024519       1 shared_informer.go:318] Caches are synced for persistent volume
	I1108 23:44:37.048147       1 shared_informer.go:318] Caches are synced for TTL
	I1108 23:44:37.409049       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 23:44:37.413937       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 23:44:37.414048       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	E1108 23:45:06.961011       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.39.189:8441/api": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:45:07.410557       1 garbagecollector.go:818] "failed to discover preferred resources" error="Get \"https://192.168.39.189:8441/api\": dial tcp 192.168.39.189:8441: connect: connection refused"
	
	* 
	* ==> kube-proxy [998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1] <==
	* I1108 23:43:40.754980       1 server_others.go:69] "Using iptables proxy"
	I1108 23:43:40.769210       1 node.go:141] Successfully retrieved node IP: 192.168.39.189
	I1108 23:43:40.838060       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 23:43:40.838106       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 23:43:40.841931       1 server_others.go:152] "Using iptables Proxier"
	I1108 23:43:40.842026       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 23:43:40.842300       1 server.go:846] "Version info" version="v1.28.3"
	I1108 23:43:40.842337       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 23:43:40.843102       1 config.go:188] "Starting service config controller"
	I1108 23:43:40.843156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 23:43:40.843175       1 config.go:97] "Starting endpoint slice config controller"
	I1108 23:43:40.843178       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 23:43:40.843838       1 config.go:315] "Starting node config controller"
	I1108 23:43:40.843878       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 23:43:40.943579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 23:43:40.943667       1 shared_informer.go:318] Caches are synced for service config
	I1108 23:43:40.943937       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [fb3df666c8263c19fd9a028191dcb6e116547d67a9bf7f535ab103998f60679d] <==
	* I1108 23:44:13.012381       1 shared_informer.go:311] Waiting for caches to sync for node config
	W1108 23:44:13.012621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.012810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.013169       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.189:8441: connect: connection refused' (may retry after sleeping)
	W1108 23:44:13.815291       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.815363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:13.950038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.950102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:14.326340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:14.326643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:15.820268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:15.820340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:16.787304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:16.787347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:17.093198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:17.093270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:19.899967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:19.900010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:20.381161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:20.381245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:24.387034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	E1108 23:44:24.387290       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	I1108 23:44:29.107551       1 shared_informer.go:318] Caches are synced for service config
	I1108 23:44:29.513134       1 shared_informer.go:318] Caches are synced for node config
	I1108 23:44:35.808555       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a06cdad021ec7e1e28779a525beede6288ae5f847a64e005969e95c7cf80f00a] <==
	* I1108 23:44:12.860548       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1108 23:44:12.860643       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1108 23:44:12.860727       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 23:44:12.864294       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 23:44:12.864532       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:44:12.864566       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 23:44:12.864879       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 23:44:12.961705       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1108 23:44:12.965186       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 23:44:12.965350       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1108 23:44:24.314857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E1108 23:44:24.314957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E1108 23:44:24.319832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E1108 23:44:24.320160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods)
	E1108 23:44:24.320904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E1108 23:44:24.321298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services)
	E1108 23:44:24.321419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E1108 23:44:24.322244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E1108 23:44:24.322300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces)
	E1108 23:44:24.322320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E1108 23:44:24.324606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E1108 23:44:24.328639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes)
	E1108 23:44:24.328706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E1108 23:44:24.328951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E1108 23:44:24.401809       1 reflector.go:147] pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kube-scheduler [daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9] <==
	* E1108 23:43:23.555057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:43:23.555310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 23:43:23.555637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 23:43:24.357554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 23:43:24.357652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 23:43:24.363070       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 23:43:24.363147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 23:43:24.439814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 23:43:24.439863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 23:43:24.511419       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 23:43:24.511725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 23:43:24.521064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1108 23:43:24.521357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1108 23:43:24.636054       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 23:43:24.636113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1108 23:43:24.742651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 23:43:24.742701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 23:43:24.766583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:43:24.766665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 23:43:24.821852       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 23:43:24.821977       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1108 23:43:26.911793       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:44:09.072908       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1108 23:44:09.073170       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1108 23:44:09.073383       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 23:42:35 UTC, ends at Wed 2023-11-08 23:45:12 UTC. --
	Nov 08 23:45:05 functional-400359 kubelet[3491]: E1108 23:45:05.390784    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:05 functional-400359 kubelet[3491]: E1108 23:45:05.390850    3491 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Nov 08 23:45:10 functional-400359 kubelet[3491]: E1108 23:45:10.622848    3491 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"storage-provisioner.1795ca8194a7071e", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"storage-provisioner", UID:"01aed977-1439-433c-b8b1-869c92fcd9e2", APIVersion:"v1", ResourceVersion:"444", FieldPath:"spec.containers{storage-provisioner}"}, Reason:"Pulled", Message:"Container image \"gcr.io/k8s-minikube/storage-provisioner:v5\" already present on machine", Source:v1.EventSource{Component:"kubelet", Host:"functional-400359"}, FirstTimestamp:time.Date(2023, time.November, 8, 23, 44, 52, 295796510, time.Local), LastTimestamp:time.Date(2023, time.November, 8, 23, 44, 52, 295796510, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"functional-400359"}': 'Post "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events": dial tcp 192.168.39.189:8441: connect: connection refused'(may retry after sleeping)
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.083010    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.083163    3491 status_manager.go:853] "Failed to get status for pod" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: E1108 23:45:11.183790    3491 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused" interval="7s"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.237513    3491 status_manager.go:853] "Failed to get status for pod" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.237974    3491 status_manager.go:853] "Failed to get status for pod" podUID="782fbbe1f7d627cd92711fb14a0b0813" pod="kube-system/kube-apiserver-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.238185    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.364699    3491 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782fbbe1f7d627cd92711fb14a0b0813-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "782fbbe1f7d627cd92711fb14a0b0813" (UID: "782fbbe1f7d627cd92711fb14a0b0813"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.364698    3491 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/782fbbe1f7d627cd92711fb14a0b0813-ca-certs\") pod \"782fbbe1f7d627cd92711fb14a0b0813\" (UID: \"782fbbe1f7d627cd92711fb14a0b0813\") "
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.364984    3491 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/782fbbe1f7d627cd92711fb14a0b0813-usr-share-ca-certificates\") pod \"782fbbe1f7d627cd92711fb14a0b0813\" (UID: \"782fbbe1f7d627cd92711fb14a0b0813\") "
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.365037    3491 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/782fbbe1f7d627cd92711fb14a0b0813-k8s-certs\") pod \"782fbbe1f7d627cd92711fb14a0b0813\" (UID: \"782fbbe1f7d627cd92711fb14a0b0813\") "
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.365034    3491 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782fbbe1f7d627cd92711fb14a0b0813-usr-share-ca-certificates" (OuterVolumeSpecName: "usr-share-ca-certificates") pod "782fbbe1f7d627cd92711fb14a0b0813" (UID: "782fbbe1f7d627cd92711fb14a0b0813"). InnerVolumeSpecName "usr-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.365058    3491 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/782fbbe1f7d627cd92711fb14a0b0813-k8s-certs" (OuterVolumeSpecName: "k8s-certs") pod "782fbbe1f7d627cd92711fb14a0b0813" (UID: "782fbbe1f7d627cd92711fb14a0b0813"). InnerVolumeSpecName "k8s-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.365321    3491 reconciler_common.go:300] "Volume detached for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/782fbbe1f7d627cd92711fb14a0b0813-usr-share-ca-certificates\") on node \"functional-400359\" DevicePath \"\""
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.365377    3491 reconciler_common.go:300] "Volume detached for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/782fbbe1f7d627cd92711fb14a0b0813-k8s-certs\") on node \"functional-400359\" DevicePath \"\""
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.365391    3491 reconciler_common.go:300] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/782fbbe1f7d627cd92711fb14a0b0813-ca-certs\") on node \"functional-400359\" DevicePath \"\""
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.372243    3491 scope.go:117] "RemoveContainer" containerID="dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.375132    3491 status_manager.go:853] "Failed to get status for pod" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.375692    3491 status_manager.go:853] "Failed to get status for pod" podUID="782fbbe1f7d627cd92711fb14a0b0813" pod="kube-system/kube-apiserver-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.375977    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.382174    3491 scope.go:117] "RemoveContainer" containerID="dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: E1108 23:45:11.383139    3491 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\": not found" containerID="dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b"
	Nov 08 23:45:11 functional-400359 kubelet[3491]: I1108 23:45:11.383221    3491 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b"} err="failed to get container status \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\": not found"
	
	* 
	* ==> storage-provisioner [824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d] <==
	* I1108 23:45:05.218771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 23:45:05.220296       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 23:45:12.031243  214261 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	E1108 23:45:12.214560  214261 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-08T23:45:12Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-11-08T23:45:12Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: describe nodes, kube-apiserver [bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0]

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-400359 -n functional-400359
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-400359 -n functional-400359: exit status 2 (267.87518ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-400359" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (2.09s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 logs --file /tmp/TestFunctionalserialLogsFileCmd1305703579/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 logs --file /tmp/TestFunctionalserialLogsFileCmd1305703579/001/logs.txt: (1.451687291s)
functional_test.go:1251: expected empty minikube logs output, but got: 
***
-- stdout --
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 23:45:15.214442  214350 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8441 was refused - did you specify the right host or port?\n\n** /stderr **"
	E1108 23:45:15.390604  214350 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-08T23:45:15Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-11-08T23:45:15Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: describe nodes, kube-apiserver [bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0]

                                                
                                                
** /stderr *****
--- FAIL: TestFunctional/serial/LogsFileCmd (1.45s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-400359 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-400359 apply -f testdata/invalidsvc.yaml: exit status 1 (66.739977ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.39.189:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-400359 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (6.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-400359 replace --force -f testdata/mysql.yaml
functional_test.go:1789: (dbg) Non-zero exit: kubectl --context functional-400359 replace --force -f testdata/mysql.yaml: exit status 1 (67.823067ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.39.189:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1791: failed to kubectl replace mysql: args "kubectl --context functional-400359 replace --force -f testdata/mysql.yaml" failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-400359 -n functional-400359
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-400359 -n functional-400359: exit status 2 (452.940954ms)

                                                
                                                
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 logs -n 25: (5.570588871s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | minikube-local-cache-test:functional-400359                              |                   |         |         |                     |                     |
	| cache   | functional-400359 cache delete                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | minikube-local-cache-test:functional-400359                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	| ssh     | functional-400359 ssh sudo                                               | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-400359                                                        | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-400359 ssh                                                    | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-400359 cache reload                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	| ssh     | functional-400359 ssh                                                    | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-400359 kubectl --                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | --context functional-400359                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	| config  | functional-400359 config unset                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| license |                                                                          | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	| config  | functional-400359 config get                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-400359 config set                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|         | cpus 2                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-400359 ssh sudo                                               | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|         | systemctl is-active docker                                               |                   |         |         |                     |                     |
	| config  | functional-400359 config get                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-400359 config unset                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-400359 config get                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:43:59
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:43:59.599157  213888 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:43:59.599412  213888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:43:59.599416  213888 out.go:309] Setting ErrFile to fd 2...
	I1108 23:43:59.599420  213888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:43:59.599606  213888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	I1108 23:43:59.600217  213888 out.go:303] Setting JSON to false
	I1108 23:43:59.601119  213888 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":23194,"bootTime":1699463846,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 23:43:59.601189  213888 start.go:138] virtualization: kvm guest
	I1108 23:43:59.603447  213888 out.go:177] * [functional-400359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 23:43:59.605356  213888 notify.go:220] Checking for updates...
	I1108 23:43:59.605376  213888 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:43:59.607074  213888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:43:59.608704  213888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:43:59.610319  213888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	I1108 23:43:59.611947  213888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 23:43:59.613523  213888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:43:59.615400  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:43:59.615477  213888 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:43:59.615864  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:43:59.615909  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:43:59.631683  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I1108 23:43:59.632150  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:43:59.632691  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:43:59.632708  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:43:59.633075  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:43:59.633250  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.666922  213888 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 23:43:59.668639  213888 start.go:298] selected driver: kvm2
	I1108 23:43:59.668648  213888 start.go:902] validating driver "kvm2" against &{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400
359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:43:59.668789  213888 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:43:59.669167  213888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:43:59.669241  213888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17586-201782/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 23:43:59.685241  213888 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 23:43:59.685958  213888 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 23:43:59.686030  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:43:59.686038  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:43:59.686047  213888 start_flags.go:323] config:
	{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:43:59.686238  213888 iso.go:125] acquiring lock: {Name:mk33479b76ec6919fe69628bcf9e99f9786f49af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:43:59.688123  213888 out.go:177] * Starting control plane node functional-400359 in cluster functional-400359
	I1108 23:43:59.689492  213888 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:43:59.689531  213888 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4
	I1108 23:43:59.689548  213888 cache.go:56] Caching tarball of preloaded images
	I1108 23:43:59.689653  213888 preload.go:174] Found /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1108 23:43:59.689661  213888 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1108 23:43:59.689851  213888 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/config.json ...
	I1108 23:43:59.690069  213888 start.go:365] acquiring machines lock for functional-400359: {Name:mkc58a906fd9c58de0776efcd0f08335945567ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 23:43:59.690115  213888 start.go:369] acquired machines lock for "functional-400359" in 32.532µs
	I1108 23:43:59.690130  213888 start.go:96] Skipping create...Using existing machine configuration
	I1108 23:43:59.690134  213888 fix.go:54] fixHost starting: 
	I1108 23:43:59.690432  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:43:59.690465  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:43:59.706016  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I1108 23:43:59.706457  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:43:59.706983  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:43:59.707003  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:43:59.707316  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:43:59.707534  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.707715  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:43:59.709629  213888 fix.go:102] recreateIfNeeded on functional-400359: state=Running err=<nil>
	W1108 23:43:59.709665  213888 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 23:43:59.711868  213888 out.go:177] * Updating the running kvm2 "functional-400359" VM ...
	I1108 23:43:59.713307  213888 machine.go:88] provisioning docker machine ...
	I1108 23:43:59.713332  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.713637  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.713880  213888 buildroot.go:166] provisioning hostname "functional-400359"
	I1108 23:43:59.713899  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.714053  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:43:59.716647  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.717013  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:43:59.717073  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.717195  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:43:59.717406  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.717589  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.717824  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:43:59.718013  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:43:59.718360  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:43:59.718370  213888 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-400359 && echo "functional-400359" | sudo tee /etc/hostname
	I1108 23:43:59.863990  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-400359
	
	I1108 23:43:59.864012  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:43:59.866908  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.867252  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:43:59.867363  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.867442  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:43:59.867690  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.867850  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.867996  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:43:59.868145  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:43:59.868492  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:43:59.868503  213888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-400359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-400359/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-400359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 23:43:59.999382  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1108 23:43:59.999410  213888 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17586-201782/.minikube CaCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17586-201782/.minikube}
	I1108 23:43:59.999434  213888 buildroot.go:174] setting up certificates
	I1108 23:43:59.999445  213888 provision.go:83] configureAuth start
	I1108 23:43:59.999455  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.999781  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:00.002662  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.002978  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.003014  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.003248  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.005651  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.006085  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.006106  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.006287  213888 provision.go:138] copyHostCerts
	I1108 23:44:00.006374  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem, removing ...
	I1108 23:44:00.006389  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem
	I1108 23:44:00.006451  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem (1078 bytes)
	I1108 23:44:00.006581  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem, removing ...
	I1108 23:44:00.006587  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem
	I1108 23:44:00.006617  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem (1123 bytes)
	I1108 23:44:00.006719  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem, removing ...
	I1108 23:44:00.006724  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem
	I1108 23:44:00.006742  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem (1679 bytes)
	I1108 23:44:00.006784  213888 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem org=jenkins.functional-400359 san=[192.168.39.189 192.168.39.189 localhost 127.0.0.1 minikube functional-400359]
	I1108 23:44:00.203873  213888 provision.go:172] copyRemoteCerts
	I1108 23:44:00.203931  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 23:44:00.203956  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.206797  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.207094  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.207119  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.207305  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.207516  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.207692  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.207814  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.301445  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 23:44:00.331684  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 23:44:00.361187  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 23:44:00.388214  213888 provision.go:86] duration metric: configureAuth took 388.751766ms
	I1108 23:44:00.388241  213888 buildroot.go:189] setting minikube options for container-runtime
	I1108 23:44:00.388477  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:44:00.388484  213888 machine.go:91] provisioned docker machine in 675.168638ms
	I1108 23:44:00.388492  213888 start.go:300] post-start starting for "functional-400359" (driver="kvm2")
	I1108 23:44:00.388500  213888 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 23:44:00.388535  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.388924  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 23:44:00.388948  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.391561  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.391940  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.391967  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.392105  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.392316  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.392453  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.392611  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.488199  213888 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 23:44:00.492976  213888 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 23:44:00.492992  213888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-201782/.minikube/addons for local assets ...
	I1108 23:44:00.493051  213888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-201782/.minikube/files for local assets ...
	I1108 23:44:00.493113  213888 filesync.go:149] local asset: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem -> 2089632.pem in /etc/ssl/certs
	I1108 23:44:00.493174  213888 filesync.go:149] local asset: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/test/nested/copy/208963/hosts -> hosts in /etc/test/nested/copy/208963
	I1108 23:44:00.493206  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/208963
	I1108 23:44:00.501656  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem --> /etc/ssl/certs/2089632.pem (1708 bytes)
	I1108 23:44:00.525422  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/test/nested/copy/208963/hosts --> /etc/test/nested/copy/208963/hosts (40 bytes)
	I1108 23:44:00.548996  213888 start.go:303] post-start completed in 160.490436ms
	I1108 23:44:00.549028  213888 fix.go:56] fixHost completed within 858.891713ms
	I1108 23:44:00.549103  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.551962  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.552311  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.552329  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.552563  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.552735  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.552911  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.553036  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.553160  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:44:00.553504  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:44:00.553510  213888 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1108 23:44:00.679007  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699487040.675193612
	
	I1108 23:44:00.679025  213888 fix.go:206] guest clock: 1699487040.675193612
	I1108 23:44:00.679031  213888 fix.go:219] Guest: 2023-11-08 23:44:00.675193612 +0000 UTC Remote: 2023-11-08 23:44:00.549031363 +0000 UTC m=+1.003889169 (delta=126.162249ms)
	I1108 23:44:00.679051  213888 fix.go:190] guest clock delta is within tolerance: 126.162249ms
	I1108 23:44:00.679055  213888 start.go:83] releasing machines lock for "functional-400359", held for 988.934098ms
	I1108 23:44:00.679080  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.679402  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:00.682635  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.683021  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.683048  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.683271  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.683917  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.684098  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.684213  213888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 23:44:00.684252  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.684416  213888 ssh_runner.go:195] Run: cat /version.json
	I1108 23:44:00.684440  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.687054  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687399  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.687426  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687449  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687587  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.687788  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.687907  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.687935  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687948  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.688119  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.688118  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.688285  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.688448  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.688589  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.802586  213888 ssh_runner.go:195] Run: systemctl --version
	I1108 23:44:00.808787  213888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 23:44:00.814779  213888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 23:44:00.814850  213888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 23:44:00.824904  213888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 23:44:00.824923  213888 start.go:472] detecting cgroup driver to use...
	I1108 23:44:00.824994  213888 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1108 23:44:00.839653  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1108 23:44:00.852631  213888 docker.go:203] disabling cri-docker service (if available) ...
	I1108 23:44:00.852687  213888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 23:44:00.865664  213888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 23:44:00.878442  213888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 23:44:01.013896  213888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 23:44:01.176298  213888 docker.go:219] disabling docker service ...
	I1108 23:44:01.176368  213888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 23:44:01.191617  213888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 23:44:01.205423  213888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 23:44:01.352320  213888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 23:44:01.505796  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 23:44:01.520373  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 23:44:01.539920  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1108 23:44:01.552198  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1108 23:44:01.564553  213888 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1108 23:44:01.564634  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1108 23:44:01.577530  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:44:01.589460  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1108 23:44:01.601621  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:44:01.615054  213888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 23:44:01.626891  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1108 23:44:01.638637  213888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 23:44:01.649235  213888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 23:44:01.660480  213888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 23:44:01.793850  213888 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1108 23:44:01.824923  213888 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1108 23:44:01.824991  213888 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1108 23:44:01.831130  213888 retry.go:31] will retry after 821.206397ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1108 23:44:02.653187  213888 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1108 23:44:02.660143  213888 start.go:540] Will wait 60s for crictl version
	I1108 23:44:02.660193  213888 ssh_runner.go:195] Run: which crictl
	I1108 23:44:02.665280  213888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 23:44:02.711632  213888 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.8
	RuntimeApiVersion:  v1
	I1108 23:44:02.711708  213888 ssh_runner.go:195] Run: containerd --version
	I1108 23:44:02.742401  213888 ssh_runner.go:195] Run: containerd --version
	I1108 23:44:02.772662  213888 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.7.8 ...
	I1108 23:44:02.774143  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:02.776902  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:02.777294  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:02.777321  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:02.777524  213888 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 23:44:02.784598  213888 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1108 23:44:02.786474  213888 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:44:02.786612  213888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:44:02.834765  213888 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:44:02.834781  213888 containerd.go:518] Images already preloaded, skipping extraction
	I1108 23:44:02.834839  213888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:44:02.877779  213888 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:44:02.877797  213888 cache_images.go:84] Images are preloaded, skipping loading
	I1108 23:44:02.877870  213888 ssh_runner.go:195] Run: sudo crictl info
	I1108 23:44:02.924597  213888 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1108 23:44:02.924626  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:44:02.924635  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:44:02.924644  213888 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 23:44:02.924661  213888 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8441 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-400359 NodeName:functional-400359 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubele
tConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 23:44:02.924813  213888 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-400359"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 23:44:02.924893  213888 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-400359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1108 23:44:02.924953  213888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 23:44:02.936489  213888 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 23:44:02.936562  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 23:44:02.947183  213888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I1108 23:44:02.966007  213888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 23:44:02.985587  213888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1962 bytes)
	I1108 23:44:03.005107  213888 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I1108 23:44:03.010099  213888 certs.go:56] Setting up /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359 for IP: 192.168.39.189
	I1108 23:44:03.010128  213888 certs.go:190] acquiring lock for shared ca certs: {Name:mk39cbc6402159d6a738802f6361f72eac5d34d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:03.010382  213888 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17586-201782/.minikube/ca.key
	I1108 23:44:03.010425  213888 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.key
	I1108 23:44:03.010497  213888 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.key
	I1108 23:44:03.010540  213888 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.key.3964182b
	I1108 23:44:03.010588  213888 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.key
	I1108 23:44:03.010739  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963.pem (1338 bytes)
	W1108 23:44:03.010780  213888 certs.go:433] ignoring /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963_empty.pem, impossibly tiny 0 bytes
	I1108 23:44:03.010790  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 23:44:03.010822  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem (1078 bytes)
	I1108 23:44:03.010853  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem (1123 bytes)
	I1108 23:44:03.010885  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem (1679 bytes)
	I1108 23:44:03.010944  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem (1708 bytes)
	I1108 23:44:03.011800  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 23:44:03.052476  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 23:44:03.084167  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 23:44:03.113455  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 23:44:03.138855  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 23:44:03.170000  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 23:44:03.203207  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 23:44:03.233030  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 23:44:03.262431  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963.pem --> /usr/share/ca-certificates/208963.pem (1338 bytes)
	I1108 23:44:03.288670  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem --> /usr/share/ca-certificates/2089632.pem (1708 bytes)
	I1108 23:44:03.317344  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 23:44:03.345150  213888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 23:44:03.367221  213888 ssh_runner.go:195] Run: openssl version
	I1108 23:44:03.373631  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2089632.pem && ln -fs /usr/share/ca-certificates/2089632.pem /etc/ssl/certs/2089632.pem"
	I1108 23:44:03.388662  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.394338  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  8 23:42 /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.394401  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.400580  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2089632.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 23:44:03.412248  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 23:44:03.425515  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.430926  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  8 23:35 /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.430990  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.437443  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 23:44:03.447837  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208963.pem && ln -fs /usr/share/ca-certificates/208963.pem /etc/ssl/certs/208963.pem"
	I1108 23:44:03.461453  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.467398  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  8 23:42 /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.467478  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.474228  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/208963.pem /etc/ssl/certs/51391683.0"
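The openssl/ln pairs above install each CA into the system trust store under its subject-hash name (for example b5213941.0 for minikubeCA.pem). A small Go sketch of the same step, shelling out to openssl for the hash; the openssl binary on PATH and the paths used are assumptions for illustration:

// Compute the openssl subject hash for a cert and link it into
// /etc/ssl/certs as <hash>.0, mirroring the `openssl x509 -hash` + `ln -fs`
// commands in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}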
	I1108 23:44:03.487446  213888 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 23:44:03.492652  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 23:44:03.499552  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 23:44:03.507193  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 23:44:03.514236  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 23:44:03.521522  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 23:44:03.527708  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
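The `-checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours. The same check in plain Go (stdlib only; the path is illustrative):

// Report whether a PEM-encoded certificate expires within the given window,
// equivalent to `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}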
	I1108 23:44:03.534082  213888 kubeadm.go:404] StartCluster: {Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:44:03.534196  213888 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1108 23:44:03.534267  213888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:44:03.584679  213888 cri.go:89] found id: "db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f"
	I1108 23:44:03.584695  213888 cri.go:89] found id: "e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5"
	I1108 23:44:03.584698  213888 cri.go:89] found id: "998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1"
	I1108 23:44:03.584701  213888 cri.go:89] found id: "daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9"
	I1108 23:44:03.584704  213888 cri.go:89] found id: "b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2"
	I1108 23:44:03.584707  213888 cri.go:89] found id: "46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573"
	I1108 23:44:03.584709  213888 cri.go:89] found id: "a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086"
	I1108 23:44:03.584711  213888 cri.go:89] found id: ""
	I1108 23:44:03.584767  213888 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1108 23:44:03.616378  213888 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","pid":1604,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9/rootfs","created":"2023-11-08T23:43:40.318157335Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wv6f7_7ab3ac5b-5a0e-462b-a171-08f507184dfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wv6f7","io.kubernetes.cri.sand
box-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7ab3ac5b-5a0e-462b-a171-08f507184dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","pid":1110,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1/rootfs","created":"2023-11-08T23:43:18.68773069Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-400359_faaa6dec7d9cbf75400a4930b93bdc7d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes
.cri.sandbox-name":"etcd-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"faaa6dec7d9cbf75400a4930b93bdc7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573","pid":1243,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573/rootfs","created":"2023-11-08T23:43:19.79473196Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","io.kubernetes.cri.sandbox-name":"etcd-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fa
aa6dec7d9cbf75400a4930b93bdc7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","pid":1137,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0/rootfs","created":"2023-11-08T23:43:18.759582Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-400359","io.ku
bernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"782fbbe1f7d627cd92711fb14a0b0813"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","pid":1799,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44/rootfs","created":"2023-11-08T23:43:41.584597939Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-tqvtr_b03be54f-57e6-4247-84ba-9545f9b1b4ed","io.kubernetes.cri.sandbox-memory
":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-tqvtr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b03be54f-57e6-4247-84ba-9545f9b1b4ed"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1/rootfs","created":"2023-11-08T23:43:40.529772065Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.3","io.kubernetes.cri.sandbox-id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","io.kubernetes.cri.sandbox-name":"kube-proxy-wv6f7","io.kubernetes.cri.sandbox-namespace":"kube-sy
stem","io.kubernetes.cri.sandbox-uid":"7ab3ac5b-5a0e-462b-a171-08f507184dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","pid":1160,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b/rootfs","created":"2023-11-08T23:43:18.813882118Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-400359_af28ec4ee73fcf841ab21630a0a61078","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox
-name":"kube-scheduler-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"af28ec4ee73fcf841ab21630a0a61078"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","pid":1838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186/rootfs","created":"2023-11-08T23:43:41.837718349Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_01aed977-1439-433c-b8b1-869c92
fcd9e2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"01aed977-1439-433c-b8b1-869c92fcd9e2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086","pid":1198,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086/rootfs","created":"2023-11-08T23:43:19.509573182Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.3","io.kubernetes.cri.sandbox-id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-40
0359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"782fbbe1f7d627cd92711fb14a0b0813"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2","pid":1272,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2/rootfs","created":"2023-11-08T23:43:19.928879069Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.3","io.kubernetes.cri.sandbox-id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.
cri.sandbox-uid":"926dd51d8b9a510a42b3d2d730469c12"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","pid":1169,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef/rootfs","created":"2023-11-08T23:43:18.854841205Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-400359_926dd51d8b9a510a42b3d2d730469c12","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-con
troller-manager-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"926dd51d8b9a510a42b3d2d730469c12"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9","pid":1308,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9/rootfs","created":"2023-11-08T23:43:20.119265886Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.3","io.kubernetes.cri.sandbox-id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernete
s.cri.sandbox-uid":"af28ec4ee73fcf841ab21630a0a61078"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f","pid":1923,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f/rootfs","created":"2023-11-08T23:43:43.423326377Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"01aed977-1439-433c-b8b1-869c92fcd9e2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id
":"e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5","pid":1870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5/rootfs","created":"2023-11-08T23:43:42.0245694Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-tqvtr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b03be54f-57e6-4247-84ba-9545f9b1b4ed"},"owner":"root"}]
	I1108 23:44:03.616807  213888 cri.go:126] list returned 14 containers
	I1108 23:44:03.616824  213888 cri.go:129] container: {ID:0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9 Status:running}
	I1108 23:44:03.616850  213888 cri.go:131] skipping 0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9 - not in ps
	I1108 23:44:03.616857  213888 cri.go:129] container: {ID:127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1 Status:running}
	I1108 23:44:03.616865  213888 cri.go:131] skipping 127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1 - not in ps
	I1108 23:44:03.616871  213888 cri.go:129] container: {ID:46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 Status:running}
	I1108 23:44:03.616879  213888 cri.go:135] skipping {46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 running}: state = "running", want "paused"
	I1108 23:44:03.616892  213888 cri.go:129] container: {ID:523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 Status:running}
	I1108 23:44:03.616900  213888 cri.go:131] skipping 523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 - not in ps
	I1108 23:44:03.616906  213888 cri.go:129] container: {ID:8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44 Status:running}
	I1108 23:44:03.616913  213888 cri.go:131] skipping 8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44 - not in ps
	I1108 23:44:03.616919  213888 cri.go:129] container: {ID:998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 Status:running}
	I1108 23:44:03.616927  213888 cri.go:135] skipping {998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 running}: state = "running", want "paused"
	I1108 23:44:03.616934  213888 cri.go:129] container: {ID:9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b Status:running}
	I1108 23:44:03.616941  213888 cri.go:131] skipping 9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b - not in ps
	I1108 23:44:03.616947  213888 cri.go:129] container: {ID:9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186 Status:running}
	I1108 23:44:03.616954  213888 cri.go:131] skipping 9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186 - not in ps
	I1108 23:44:03.616959  213888 cri.go:129] container: {ID:a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086 Status:running}
	I1108 23:44:03.616963  213888 cri.go:135] skipping {a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086 running}: state = "running", want "paused"
	I1108 23:44:03.616967  213888 cri.go:129] container: {ID:b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 Status:running}
	I1108 23:44:03.616973  213888 cri.go:135] skipping {b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 running}: state = "running", want "paused"
	I1108 23:44:03.616980  213888 cri.go:129] container: {ID:ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef Status:running}
	I1108 23:44:03.616988  213888 cri.go:131] skipping ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef - not in ps
	I1108 23:44:03.616993  213888 cri.go:129] container: {ID:daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 Status:running}
	I1108 23:44:03.617001  213888 cri.go:135] skipping {daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 running}: state = "running", want "paused"
	I1108 23:44:03.617019  213888 cri.go:129] container: {ID:db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f Status:running}
	I1108 23:44:03.617027  213888 cri.go:135] skipping {db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f running}: state = "running", want "paused"
	I1108 23:44:03.617034  213888 cri.go:129] container: {ID:e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 Status:running}
	I1108 23:44:03.617041  213888 cri.go:135] skipping {e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 running}: state = "running", want "paused"
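The cri.go lines above decode the `runc list -f json` output and then filter it: sandbox IDs that crictl did not report are skipped, and since the requested state is "paused" every running container is skipped as well, leaving the paused set empty. A simplified Go sketch of that filter (a paraphrase of the logic, not minikube's exact code):

// Decode runc's JSON listing, drop containers crictl did not report, and keep
// only those whose status matches the requested state.
package main

import (
	"encoding/json"
	"fmt"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func filterByState(raw []byte, inCrictl map[string]bool, want string) ([]string, error) {
	var list []runcContainer
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var keep []string
	for _, c := range list {
		if !inCrictl[c.ID] {
			continue // "skipping <id> - not in ps"
		}
		if c.Status != want {
			continue // state = "running", want "paused"
		}
		keep = append(keep, c.ID)
	}
	return keep, nil
}

func main() {
	raw := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids, _ := filterByState(raw, map[string]bool{"abc": true, "def": true}, "paused")
	fmt.Println(ids) // [def]
}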
	I1108 23:44:03.617112  213888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 23:44:03.629140  213888 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 23:44:03.629156  213888 kubeadm.go:636] restartCluster start
	I1108 23:44:03.629300  213888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 23:44:03.640035  213888 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:03.640634  213888 kubeconfig.go:92] found "functional-400359" server: "https://192.168.39.189:8441"
	I1108 23:44:03.641989  213888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 23:44:03.652731  213888 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
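The unified diff above is why a restart is needed: the new kubeadm.yaml swaps the default admission-plugin list for the single NamespaceAutoProvision plugin requested via --extra-config. A sketch of the underlying check, treating diff's exit status 1 as "configs differ" (illustrative, not minikube's exact implementation):

// Run `diff -u old new`; exit status 0 means identical, 1 means the files
// differ (the diff text is in the output), anything else is a real error.
package main

import (
	"fmt"
	"os/exec"
)

func configsDiffer(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if differ {
		fmt.Println("needs reconfigure:\n" + diff)
	}
}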
	I1108 23:44:03.652746  213888 kubeadm.go:1128] stopping kube-system containers ...
	I1108 23:44:03.652762  213888 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1108 23:44:03.652812  213888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:44:03.699235  213888 cri.go:89] found id: "db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f"
	I1108 23:44:03.699249  213888 cri.go:89] found id: "e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5"
	I1108 23:44:03.699251  213888 cri.go:89] found id: "998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1"
	I1108 23:44:03.699255  213888 cri.go:89] found id: "daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9"
	I1108 23:44:03.699260  213888 cri.go:89] found id: "b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2"
	I1108 23:44:03.699263  213888 cri.go:89] found id: "46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573"
	I1108 23:44:03.699265  213888 cri.go:89] found id: "a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086"
	I1108 23:44:03.699268  213888 cri.go:89] found id: ""
	I1108 23:44:03.699272  213888 cri.go:234] Stopping containers: [db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086]
	I1108 23:44:03.699323  213888 ssh_runner.go:195] Run: which crictl
	I1108 23:44:03.703856  213888 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086
	I1108 23:44:19.459008  213888 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086: (15.75506263s)
	I1108 23:44:19.459080  213888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 23:44:19.504154  213888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 23:44:19.515266  213888 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  8 23:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Nov  8 23:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  8 23:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Nov  8 23:43 /etc/kubernetes/scheduler.conf
	
	I1108 23:44:19.515346  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1108 23:44:19.524771  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1108 23:44:19.534582  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1108 23:44:19.544348  213888 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:19.544402  213888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 23:44:19.553487  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1108 23:44:19.562898  213888 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:19.562943  213888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 23:44:19.572855  213888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 23:44:19.583092  213888 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 23:44:19.583112  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:19.656656  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:20.718251  213888 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.061543708s)
	I1108 23:44:20.718274  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:20.940824  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:21.049550  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:21.155180  213888 api_server.go:52] waiting for apiserver process to appear ...
	I1108 23:44:21.155262  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:21.170827  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:21.687533  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:22.187100  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:22.201909  213888 api_server.go:72] duration metric: took 1.046727455s to wait for apiserver process to appear ...
	I1108 23:44:22.201930  213888 api_server.go:88] waiting for apiserver healthz status ...
	I1108 23:44:22.201951  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:22.202592  213888 api_server.go:269] stopped: https://192.168.39.189:8441/healthz: Get "https://192.168.39.189:8441/healthz": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:22.202621  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:22.203025  213888 api_server.go:269] stopped: https://192.168.39.189:8441/healthz: Get "https://192.168.39.189:8441/healthz": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:22.703898  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.321821  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 23:44:24.321848  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 23:44:24.321866  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.331452  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 23:44:24.331472  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 23:44:24.703560  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.710858  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 23:44:24.710888  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 23:44:25.203966  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:25.210943  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 23:44:25.210976  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 23:44:25.703512  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:25.709194  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 200:
	ok
	I1108 23:44:25.717645  213888 api_server.go:141] control plane version: v1.28.3
	I1108 23:44:25.717670  213888 api_server.go:131] duration metric: took 3.515732599s to wait for apiserver health ...
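The healthz wait above goes through the expected progression: connection refused while the apiserver comes back, 403 for the anonymous probe, 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) finish, then 200. A self-contained Go sketch of such a poll loop; the URL, timeout, and the decision to skip TLS verification are assumptions for illustration:

// Poll an apiserver /healthz endpoint until it returns 200, treating
// connection errors, 403 and 500 responses as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
			// 403/500 mean the apiserver is up but not fully initialised.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.189:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}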
	I1108 23:44:25.717682  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:44:25.717690  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:44:25.719887  213888 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 23:44:25.721531  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 23:44:25.734492  213888 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 23:44:25.771439  213888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 23:44:25.784433  213888 system_pods.go:59] 7 kube-system pods found
	I1108 23:44:25.784465  213888 system_pods.go:61] "coredns-5dd5756b68-tqvtr" [b03be54f-57e6-4247-84ba-9545f9b1b4ed] Running
	I1108 23:44:25.784475  213888 system_pods.go:61] "etcd-functional-400359" [70bdf2a8-b999-4d46-baf3-0c9267d9d3ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 23:44:25.784489  213888 system_pods.go:61] "kube-apiserver-functional-400359" [9b2db385-150c-4599-b59e-165208edd076] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 23:44:25.784498  213888 system_pods.go:61] "kube-controller-manager-functional-400359" [e2f2bb0b-f018-4ada-bd5d-d225b097763b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 23:44:25.784504  213888 system_pods.go:61] "kube-proxy-wv6f7" [7ab3ac5b-5a0e-462b-a171-08f507184dfa] Running
	I1108 23:44:25.784511  213888 system_pods.go:61] "kube-scheduler-functional-400359" [0156fad8-02e5-40ae-a5d1-17824d5c238b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 23:44:25.784521  213888 system_pods.go:61] "storage-provisioner" [01aed977-1439-433c-b8b1-869c92fcd9e2] Running
	I1108 23:44:25.784531  213888 system_pods.go:74] duration metric: took 13.073006ms to wait for pod list to return data ...
	I1108 23:44:25.784539  213888 node_conditions.go:102] verifying NodePressure condition ...
	I1108 23:44:25.793569  213888 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 23:44:25.793597  213888 node_conditions.go:123] node cpu capacity is 2
	I1108 23:44:25.793611  213888 node_conditions.go:105] duration metric: took 9.06541ms to run NodePressure ...
	I1108 23:44:25.793633  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:26.114141  213888 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 23:44:26.120712  213888 kubeadm.go:787] kubelet initialised
	I1108 23:44:26.120723  213888 kubeadm.go:788] duration metric: took 6.565858ms waiting for restarted kubelet to initialise ...
	I1108 23:44:26.120731  213888 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 23:44:26.131331  213888 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:26.138144  213888 pod_ready.go:92] pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:26.138155  213888 pod_ready.go:81] duration metric: took 6.806304ms waiting for pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:26.138164  213888 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:28.164811  213888 pod_ready.go:102] pod "etcd-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:30.665514  213888 pod_ready.go:92] pod "etcd-functional-400359" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:30.665553  213888 pod_ready.go:81] duration metric: took 4.527359591s waiting for pod "etcd-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:30.665565  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:32.689403  213888 pod_ready.go:102] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:34.690254  213888 pod_ready.go:102] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:35.686775  213888 pod_ready.go:92] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:35.686791  213888 pod_ready.go:81] duration metric: took 5.021218707s waiting for pod "kube-apiserver-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:35.686800  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:37.708359  213888 pod_ready.go:102] pod "kube-controller-manager-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:40.208162  213888 pod_ready.go:102] pod "kube-controller-manager-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:41.201149  213888 pod_ready.go:97] error getting pod "kube-controller-manager-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201165  213888 pod_ready.go:81] duration metric: took 5.514358749s waiting for pod "kube-controller-manager-functional-400359" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.201176  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201204  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wv6f7" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:41.201819  213888 pod_ready.go:97] error getting pod "kube-proxy-wv6f7" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wv6f7": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201831  213888 pod_ready.go:81] duration metric: took 621.035µs waiting for pod "kube-proxy-wv6f7" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.201841  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-wv6f7" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wv6f7": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201857  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:41.202340  213888 pod_ready.go:97] error getting pod "kube-scheduler-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.202352  213888 pod_ready.go:81] duration metric: took 489.317µs waiting for pod "kube-scheduler-functional-400359" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.202362  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.202373  213888 pod_ready.go:38] duration metric: took 15.08163132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
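The pod_ready waits above poll each system-critical pod until its Ready condition is True, and bail out once the apiserver stops answering (connection refused). A sketch of the per-pod wait using client-go, which is assumed here; the kubeconfig path and pod name are illustrative:

// Wait until a pod's Ready condition is True, polling every two seconds.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-functional-400359", 4*time.Minute))
}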
	I1108 23:44:41.202390  213888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 23:44:41.213978  213888 ops.go:34] apiserver oom_adj: -16
	I1108 23:44:41.213994  213888 kubeadm.go:640] restartCluster took 37.584832416s
	I1108 23:44:41.214002  213888 kubeadm.go:406] StartCluster complete in 37.679936432s
	I1108 23:44:41.214034  213888 settings.go:142] acquiring lock: {Name:mkb2acb83ccee48e6a009b8a47bf5424e6c38acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:41.214142  213888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:44:41.215036  213888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-201782/kubeconfig: {Name:mk9c6e9f67ac12aac98932c0b45c3a0608805854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:41.215314  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 23:44:41.215404  213888 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 23:44:41.215479  213888 addons.go:69] Setting storage-provisioner=true in profile "functional-400359"
	I1108 23:44:41.215505  213888 addons.go:69] Setting default-storageclass=true in profile "functional-400359"
	I1108 23:44:41.215525  213888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-400359"
	I1108 23:44:41.215526  213888 addons.go:231] Setting addon storage-provisioner=true in "functional-400359"
	W1108 23:44:41.215533  213888 addons.go:240] addon storage-provisioner should already be in state true
	I1108 23:44:41.215537  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:44:41.215605  213888 host.go:66] Checking if "functional-400359" exists ...
	I1108 23:44:41.215913  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.215951  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.216018  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.216055  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1108 23:44:41.216959  213888 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-400359" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.39.189:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:41.216977  213888 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.39.189:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.217012  213888 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1108 23:44:41.220368  213888 out.go:177] * Verifying Kubernetes components...
	I1108 23:44:41.222004  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:44:41.231875  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I1108 23:44:41.232530  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.233190  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.233218  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.233719  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.234280  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.234325  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.237697  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1108 23:44:41.238255  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.238752  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.238768  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.239192  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.239445  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.244598  213888 addons.go:231] Setting addon default-storageclass=true in "functional-400359"
	W1108 23:44:41.244614  213888 addons.go:240] addon default-storageclass should already be in state true
	I1108 23:44:41.244642  213888 host.go:66] Checking if "functional-400359" exists ...
	I1108 23:44:41.245132  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.245164  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.252037  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I1108 23:44:41.252498  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.253020  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.253051  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.253456  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.253670  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.255485  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:41.257960  213888 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:44:41.259863  213888 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 23:44:41.259875  213888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 23:44:41.259896  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:41.261665  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I1108 23:44:41.262263  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.262840  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.262867  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.263263  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.263662  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.263878  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.263916  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.264121  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:41.264156  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.264394  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:41.264629  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:41.264831  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:41.265036  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:41.280509  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I1108 23:44:41.281054  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.281632  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.281643  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.282046  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.282278  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.284072  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:41.284406  213888 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 23:44:41.284420  213888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 23:44:41.284442  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:41.287607  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.288057  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:41.288091  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.288286  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:41.288503  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:41.288686  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:41.288836  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:41.340989  213888 node_ready.go:35] waiting up to 6m0s for node "functional-400359" to be "Ready" ...
	E1108 23:44:41.341045  213888 start.go:891] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1108 23:44:41.341073  213888 start.go:294] Unable to inject {"host.minikube.internal": 192.168.39.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1108 23:44:41.341104  213888 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I1108 23:44:41.341639  213888 node_ready.go:53] error getting node "functional-400359": Get "https://192.168.39.189:8441/api/v1/nodes/functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.341651  213888 node_ready.go:38] duration metric: took 637.211µs waiting for node "functional-400359" to be "Ready" ...
	I1108 23:44:41.344408  213888 out.go:177] 
	W1108 23:44:41.345988  213888 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-400359": Get "https://192.168.39.189:8441/api/v1/nodes/functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:41.346006  213888 out.go:239] * 
	W1108 23:44:41.346885  213888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 23:44:41.349263  213888 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1787086a19180       5374347291230       1 second ago         Running             kube-apiserver            0                   7d9d51206fb22       kube-apiserver-functional-400359
	824ed4a510711       6e38f40d628db       13 seconds ago       Exited              storage-provisioner       3                   9c7477be15957       storage-provisioner
	7921f51c4026f       10baa1ca17068       56 seconds ago       Running             kube-controller-manager   2                   ca712d9c0441a       kube-controller-manager-functional-400359
	bff1a67a2e4bc       5374347291230       58 seconds ago       Created             kube-apiserver            1                   523d23a3366a5       kube-apiserver-functional-400359
	88c140ed6030d       ead0a4a53df89       About a minute ago   Running             coredns                   1                   8005a17990fd0       coredns-5dd5756b68-tqvtr
	fb3df666c8263       bfc896cf80fba       About a minute ago   Running             kube-proxy                1                   0d0883976452b       kube-proxy-wv6f7
	1d784d6322fa7       73deb9a3f7025       About a minute ago   Running             etcd                      1                   1274367410852       etcd-functional-400359
	2faf0584a90c9       10baa1ca17068       About a minute ago   Exited              kube-controller-manager   1                   ca712d9c0441a       kube-controller-manager-functional-400359
	a06cdad021ec7       6d1b4fd1b182d       About a minute ago   Running             kube-scheduler            1                   9bb1405590c60       kube-scheduler-functional-400359
	e502430453488       ead0a4a53df89       About a minute ago   Exited              coredns                   0                   8005a17990fd0       coredns-5dd5756b68-tqvtr
	998ca340aa83f       bfc896cf80fba       About a minute ago   Exited              kube-proxy                0                   0d0883976452b       kube-proxy-wv6f7
	daf40bd6e2a8e       6d1b4fd1b182d       About a minute ago   Exited              kube-scheduler            0                   9bb1405590c60       kube-scheduler-functional-400359
	46b02dbdf3f22       73deb9a3f7025       About a minute ago   Exited              etcd                      0                   1274367410852       etcd-functional-400359
	
	* 
	* ==> containerd <==
	* -- Journal begins at Wed 2023-11-08 23:42:35 UTC, ends at Wed 2023-11-08 23:45:18 UTC. --
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.122784712Z" level=info msg="shim disconnected" id=dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.122833279Z" level=warning msg="cleaning up after shim disconnected" id=dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.122842119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.153972486Z" level=info msg="StopContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.156265546Z" level=info msg="StopPodSandbox for \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.156411974Z" level=info msg="Container to stop \"bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0\" must be in running or unknown state, current state \"CONTAINER_CREATED\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.156598790Z" level=info msg="Container to stop \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.208231957Z" level=info msg="shim disconnected" id=523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.208338377Z" level=warning msg="cleaning up after shim disconnected" id=523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.208351190Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.230602871Z" level=info msg="TearDown network for sandbox \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\" successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.230747079Z" level=info msg="StopPodSandbox for \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.373793669Z" level=info msg="RemoveContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.381845817Z" level=info msg="RemoveContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.382697295Z" level=error msg="ContainerStatus for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\": not found"
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.081173534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-functional-400359,Uid:a075def9e32e694bce9f109a5666a324,Namespace:kube-system,Attempt:0,}"
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.139421216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.139950373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.140019518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.140188300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.617171843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-functional-400359,Uid:a075def9e32e694bce9f109a5666a324,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d9d51206fb22764283b3b6ff089269e466321b6246094f3810c55c50c4f0f08\""
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.626047387Z" level=info msg="CreateContainer within sandbox \"7d9d51206fb22764283b3b6ff089269e466321b6246094f3810c55c50c4f0f08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.681836981Z" level=info msg="CreateContainer within sandbox \"7d9d51206fb22764283b3b6ff089269e466321b6246094f3810c55c50c4f0f08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1787086a1918079953995cda98a8a3f069c2a2aaf5f1e187d78563422030fa96\""
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.682794269Z" level=info msg="StartContainer for \"1787086a1918079953995cda98a8a3f069c2a2aaf5f1e187d78563422030fa96\""
	Nov 08 23:45:17 functional-400359 containerd[2683]: time="2023-11-08T23:45:17.433064838Z" level=info msg="StartContainer for \"1787086a1918079953995cda98a8a3f069c2a2aaf5f1e187d78563422030fa96\" returns successfully"
	
	* 
	* ==> coredns [88c140ed6030d22284aaafb49382d15ef7da52d8beb9e058c36ea698c2910d04] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57342 - 44358 "HINFO IN 4361793349757605016.248109365602167116. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.135909373s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: unknown (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: unknown (get namespaces)
	
	* 
	* ==> coredns [e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51534 - 35900 "HINFO IN 2585345581505525764.4555830120890176857. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031001187s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-400359
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-400359
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e21c718ea4d79be9ab6c82476dffc8ce4079c94e
	                    minikube.k8s.io/name=functional-400359
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T23_43_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 23:43:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-400359
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 23:45:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 23:44:24 +0000   Wed, 08 Nov 2023 23:43:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 23:44:24 +0000   Wed, 08 Nov 2023 23:43:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 23:44:24 +0000   Wed, 08 Nov 2023 23:43:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 23:44:24 +0000   Wed, 08 Nov 2023 23:43:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    functional-400359
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa12a6704bf34e1d83876a2eb3b11647
	  System UUID:                fa12a670-4bf3-4e1d-8387-6a2eb3b11647
	  Boot ID:                    c3964329-2948-4c91-b6ae-11ab6cdcadb1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.8
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-tqvtr                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     101s
	  kube-system                 etcd-functional-400359                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         113s
	  kube-system                 kube-apiserver-functional-400359             250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-functional-400359    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-wv6f7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-functional-400359             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  Starting                 67s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node functional-400359 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node functional-400359 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node functional-400359 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                113s                 kubelet          Node functional-400359 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node functional-400359 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node functional-400359 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node functional-400359 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           101s                 node-controller  Node functional-400359 event: Registered Node functional-400359 in Controller
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node functional-400359 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node functional-400359 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x7 over 59s)    kubelet          Node functional-400359 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           43s                  node-controller  Node functional-400359 event: Registered Node functional-400359 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.156846] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.062315] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.304325] systemd-fstab-generator[561]: Ignoring "noauto" for root device
	[  +0.112180] systemd-fstab-generator[572]: Ignoring "noauto" for root device
	[  +0.151842] systemd-fstab-generator[585]: Ignoring "noauto" for root device
	[  +0.124353] systemd-fstab-generator[596]: Ignoring "noauto" for root device
	[  +0.268439] systemd-fstab-generator[623]: Ignoring "noauto" for root device
	[  +6.156386] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[Nov 8 23:43] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +9.282190] systemd-fstab-generator[1362]: Ignoring "noauto" for root device
	[ +18.264010] systemd-fstab-generator[2015]: Ignoring "noauto" for root device
	[  +0.177052] systemd-fstab-generator[2026]: Ignoring "noauto" for root device
	[  +0.171180] systemd-fstab-generator[2039]: Ignoring "noauto" for root device
	[  +0.169893] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
	[  +0.296549] systemd-fstab-generator[2076]: Ignoring "noauto" for root device
	[Nov 8 23:44] systemd-fstab-generator[2615]: Ignoring "noauto" for root device
	[  +0.147087] systemd-fstab-generator[2626]: Ignoring "noauto" for root device
	[  +0.171247] systemd-fstab-generator[2639]: Ignoring "noauto" for root device
	[  +0.165487] systemd-fstab-generator[2650]: Ignoring "noauto" for root device
	[  +0.295897] systemd-fstab-generator[2676]: Ignoring "noauto" for root device
	[ +19.128891] systemd-fstab-generator[3485]: Ignoring "noauto" for root device
	[ +15.032820] kauditd_printk_skb: 23 callbacks suppressed
	
	* 
	* ==> etcd [1d784d6322fa72bf1ea8c9873171f75a644fcdac3d60a60b7253cea2aad58484] <==
	* {"level":"info","ts":"2023-11-08T23:44:10.907861Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T23:44:10.907973Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-11-08T23:44:10.908286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a switched to configuration voters=(8048648980531676538)"}
	{"level":"info","ts":"2023-11-08T23:44:10.908344Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","added-peer-id":"6fb28b9aae66857a","added-peer-peer-urls":["https://192.168.39.189:2380"]}
	{"level":"info","ts":"2023-11-08T23:44:10.908546Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:44:10.908577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:44:10.919242Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:10.919299Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:10.919177Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-08T23:44:10.920701Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-08T23:44:10.920863Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6fb28b9aae66857a","initial-advertise-peer-urls":["https://192.168.39.189:2380"],"listen-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.189:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-08T23:44:12.571328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgPreVoteResp from 6fb28b9aae66857a at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became candidate at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.571747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgVoteResp from 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.571885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became leader at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.572003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6fb28b9aae66857a elected leader 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.574123Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6fb28b9aae66857a","local-member-attributes":"{Name:functional-400359 ClientURLs:[https://192.168.39.189:2379]}","request-path":"/0/members/6fb28b9aae66857a/attributes","cluster-id":"f0bdb053fd9e03ec","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T23:44:12.574193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:44:12.575568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T23:44:12.575581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T23:44:12.57599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T23:44:12.576127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:44:12.580777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.189:2379"}
	
	* 
	* ==> etcd [46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573] <==
	* {"level":"info","ts":"2023-11-08T23:43:21.2639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:43:21.265203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.189:2379"}
	{"level":"info","ts":"2023-11-08T23:43:21.264037Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.264104Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:43:21.268038Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T23:43:21.273657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T23:43:21.27674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T23:43:21.306896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.332049Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.332311Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:43.658034Z","caller":"traceutil/trace.go:171","msg":"trace[1655151050] linearizableReadLoop","detail":"{readStateIndex:436; appliedIndex:435; }","duration":"158.288056ms","start":"2023-11-08T23:43:43.499691Z","end":"2023-11-08T23:43:43.657979Z","steps":["trace[1655151050] 'read index received'  (duration: 158.050466ms)","trace[1655151050] 'applied index is now lower than readState.Index'  (duration: 237.256µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-08T23:43:43.658216Z","caller":"traceutil/trace.go:171","msg":"trace[1004018470] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"165.867105ms","start":"2023-11-08T23:43:43.492343Z","end":"2023-11-08T23:43:43.65821Z","steps":["trace[1004018470] 'process raft request'  (duration: 165.460392ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T23:43:43.659133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.382515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2023-11-08T23:43:43.659215Z","caller":"traceutil/trace.go:171","msg":"trace[1204654578] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:419; }","duration":"159.531169ms","start":"2023-11-08T23:43:43.499663Z","end":"2023-11-08T23:43:43.659194Z","steps":["trace[1204654578] 'agreement among raft nodes before linearized reading'  (duration: 158.722284ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T23:43:49.836017Z","caller":"traceutil/trace.go:171","msg":"trace[1640228342] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"142.995238ms","start":"2023-11-08T23:43:49.693Z","end":"2023-11-08T23:43:49.835995Z","steps":["trace[1640228342] 'process raft request'  (duration: 142.737466ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T23:44:09.257705Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-08T23:44:09.257894Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-400359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	{"level":"warn","ts":"2023-11-08T23:44:09.258128Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.258264Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.273807Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.274055Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-08T23:44:09.274266Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6fb28b9aae66857a","current-leader-member-id":"6fb28b9aae66857a"}
	{"level":"info","ts":"2023-11-08T23:44:09.277371Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:09.277689Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:09.277704Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-400359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	
	* 
	* ==> kernel <==
	*  23:45:20 up 2 min,  0 users,  load average: 1.63, 0.82, 0.32
	Linux functional-400359 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1787086a1918079953995cda98a8a3f069c2a2aaf5f1e187d78563422030fa96] <==
	* I1108 23:45:19.728666       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1108 23:45:19.728682       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1108 23:45:19.991427       1 controller.go:134] Starting OpenAPI controller
	I1108 23:45:19.991550       1 controller.go:85] Starting OpenAPI V3 controller
	I1108 23:45:19.991571       1 naming_controller.go:291] Starting NamingConditionController
	I1108 23:45:19.991585       1 establishing_controller.go:76] Starting EstablishingController
	I1108 23:45:19.991599       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1108 23:45:19.991610       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1108 23:45:19.991620       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1108 23:45:19.991965       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 23:45:19.992118       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 23:45:20.056846       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 23:45:20.109545       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1108 23:45:20.114588       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 23:45:20.114677       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 23:45:20.115322       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 23:45:20.115335       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 23:45:20.115789       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 23:45:20.115916       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 23:45:20.130696       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 23:45:20.130728       1 aggregator.go:166] initial CRD sync complete...
	I1108 23:45:20.130733       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 23:45:20.130738       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 23:45:20.130743       1 cache.go:39] Caches are synced for autoregister controller
	I1108 23:45:20.735243       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	
	* 
	* ==> kube-apiserver [bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0] <==
	* 
	* ==> kube-controller-manager [2faf0584a90c98fa3ae503339949f6fdc901e881c318c3b0b4ca3323123ba1a0] <==
	* I1108 23:44:10.838065       1 serving.go:348] Generated self-signed cert in-memory
	I1108 23:44:11.452649       1 controllermanager.go:189] "Starting" version="v1.28.3"
	I1108 23:44:11.452696       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 23:44:11.454751       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1108 23:44:11.455029       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 23:44:11.455309       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 23:44:11.455704       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 23:44:11.475414       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I1108 23:44:11.576258       1 shared_informer.go:318] Caches are synced for tokens
	I1108 23:44:12.801347       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1108 23:44:12.802296       1 cleaner.go:83] "Starting CSR cleaner controller"
	I1108 23:44:12.899559       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I1108 23:44:12.899798       1 namespace_controller.go:197] "Starting namespace controller"
	I1108 23:44:12.900091       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I1108 23:44:12.926665       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I1108 23:44:12.927319       1 stateful_set.go:161] "Starting stateful set controller"
	I1108 23:44:12.927524       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I1108 23:44:12.935324       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1108 23:44:12.935710       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I1108 23:44:12.936165       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	F1108 23:44:12.956649       1 client_builder_dynamic.go:174] Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.189:8441: connect: connection refused
	
	* 
	* ==> kube-controller-manager [7921f51c4026fd4eadeac9dbccfa803fc415bc1ed99e900bd95f598a614d8315] <==
	* E1108 23:45:19.933144       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E1108 23:45:19.933165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Secret: unknown (get secrets)
	E1108 23:45:19.933183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodTemplate: unknown (get podtemplates)
	E1108 23:45:19.933207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PriorityClass: unknown (get priorityclasses.scheduling.k8s.io)
	E1108 23:45:19.933225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.NetworkPolicy: unknown (get networkpolicies.networking.k8s.io)
	E1108 23:45:19.933239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.LimitRange: unknown (get limitranges)
	E1108 23:45:19.933254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps)
	E1108 23:45:19.933311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io)
	E1108 23:45:19.933333       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: unknown
	E1108 23:45:19.933351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E1108 23:45:19.933368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: unknown (get runtimeclasses.node.k8s.io)
	E1108 23:45:19.933385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services)
	E1108 23:45:19.933400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ClusterRoleBinding: unknown (get clusterrolebindings.rbac.authorization.k8s.io)
	E1108 23:45:19.933415       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.IngressClass: unknown (get ingressclasses.networking.k8s.io)
	E1108 23:45:19.933429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Deployment: unknown (get deployments.apps)
	E1108 23:45:19.933518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E1108 23:45:19.933556       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E1108 23:45:19.933569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	E1108 23:45:19.933579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ResourceQuota: unknown (get resourcequotas)
	E1108 23:45:19.933589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces)
	E1108 23:45:19.933599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E1108 23:45:19.946914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E1108 23:45:19.947036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CronJob: unknown (get cronjobs.batch)
	E1108 23:45:19.947097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.VolumeAttachment: unknown (get volumeattachments.storage.k8s.io)
	E1108 23:45:20.031830       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io)
	
	* 
	* ==> kube-proxy [998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1] <==
	* I1108 23:43:40.754980       1 server_others.go:69] "Using iptables proxy"
	I1108 23:43:40.769210       1 node.go:141] Successfully retrieved node IP: 192.168.39.189
	I1108 23:43:40.838060       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 23:43:40.838106       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 23:43:40.841931       1 server_others.go:152] "Using iptables Proxier"
	I1108 23:43:40.842026       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 23:43:40.842300       1 server.go:846] "Version info" version="v1.28.3"
	I1108 23:43:40.842337       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 23:43:40.843102       1 config.go:188] "Starting service config controller"
	I1108 23:43:40.843156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 23:43:40.843175       1 config.go:97] "Starting endpoint slice config controller"
	I1108 23:43:40.843178       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 23:43:40.843838       1 config.go:315] "Starting node config controller"
	I1108 23:43:40.843878       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 23:43:40.943579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 23:43:40.943667       1 shared_informer.go:318] Caches are synced for service config
	I1108 23:43:40.943937       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [fb3df666c8263c19fd9a028191dcb6e116547d67a9bf7f535ab103998f60679d] <==
	* I1108 23:44:13.012381       1 shared_informer.go:311] Waiting for caches to sync for node config
	W1108 23:44:13.012621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.012810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.013169       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.189:8441: connect: connection refused' (may retry after sleeping)
	W1108 23:44:13.815291       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.815363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:13.950038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.950102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:14.326340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:14.326643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:15.820268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:15.820340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:16.787304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:16.787347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:17.093198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:17.093270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:19.899967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:19.900010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:20.381161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:20.381245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:24.387034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	E1108 23:44:24.387290       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	I1108 23:44:29.107551       1 shared_informer.go:318] Caches are synced for service config
	I1108 23:44:29.513134       1 shared_informer.go:318] Caches are synced for node config
	I1108 23:44:35.808555       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a06cdad021ec7e1e28779a525beede6288ae5f847a64e005969e95c7cf80f00a] <==
	* I1108 23:44:12.864532       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:44:12.864566       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 23:44:12.864879       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 23:44:12.961705       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1108 23:44:12.965186       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 23:44:12.965350       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1108 23:44:24.314857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E1108 23:44:24.314957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E1108 23:44:24.319832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E1108 23:44:24.320160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods)
	E1108 23:44:24.320904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E1108 23:44:24.321298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services)
	E1108 23:44:24.321419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E1108 23:44:24.322244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E1108 23:44:24.322300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces)
	E1108 23:44:24.322320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E1108 23:44:24.324606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E1108 23:44:24.328639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes)
	E1108 23:44:24.328706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E1108 23:44:24.328951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E1108 23:44:24.401809       1 reflector.go:147] pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E1108 23:45:19.867616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E1108 23:45:19.867896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes)
	E1108 23:45:19.867929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E1108 23:45:19.897687       1 reflector.go:147] pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kube-scheduler [daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9] <==
	* E1108 23:43:23.555057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:43:23.555310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 23:43:23.555637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 23:43:24.357554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 23:43:24.357652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 23:43:24.363070       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 23:43:24.363147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 23:43:24.439814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 23:43:24.439863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 23:43:24.511419       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 23:43:24.511725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 23:43:24.521064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1108 23:43:24.521357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1108 23:43:24.636054       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 23:43:24.636113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1108 23:43:24.742651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 23:43:24.742701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 23:43:24.766583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:43:24.766665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 23:43:24.821852       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 23:43:24.821977       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1108 23:43:26.911793       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:44:09.072908       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1108 23:44:09.073170       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1108 23:44:09.073383       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 23:42:35 UTC, ends at Wed 2023-11-08 23:45:21 UTC. --
	Nov 08 23:45:13 functional-400359 kubelet[3491]: I1108 23:45:13.073105    3491 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="782fbbe1f7d627cd92711fb14a0b0813" path="/var/lib/kubelet/pods/782fbbe1f7d627cd92711fb14a0b0813/volumes"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.587992    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.588871    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.589147    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.589369    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.589681    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.589698    3491 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Nov 08 23:45:16 functional-400359 kubelet[3491]: I1108 23:45:16.071801    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:16 functional-400359 kubelet[3491]: I1108 23:45:16.072135    3491 status_manager.go:853] "Failed to get status for pod" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:16 functional-400359 kubelet[3491]: I1108 23:45:16.077939    3491 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-400359" podUID="9b2db385-150c-4599-b59e-165208edd076"
	Nov 08 23:45:16 functional-400359 kubelet[3491]: E1108 23:45:16.078880    3491 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-400359"
	Nov 08 23:45:17 functional-400359 kubelet[3491]: I1108 23:45:17.072141    3491 scope.go:117] "RemoveContainer" containerID="824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d"
	Nov 08 23:45:17 functional-400359 kubelet[3491]: I1108 23:45:17.074271    3491 status_manager.go:853] "Failed to get status for pod" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:17 functional-400359 kubelet[3491]: I1108 23:45:17.074836    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:17 functional-400359 kubelet[3491]: E1108 23:45:17.079703    3491 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(01aed977-1439-433c-b8b1-869c92fcd9e2)\"" pod="kube-system/storage-provisioner" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2"
	Nov 08 23:45:18 functional-400359 kubelet[3491]: I1108 23:45:18.402550    3491 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-400359" podUID="9b2db385-150c-4599-b59e-165208edd076"
	Nov 08 23:45:20 functional-400359 kubelet[3491]: E1108 23:45:20.009031    3491 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Nov 08 23:45:20 functional-400359 kubelet[3491]: E1108 23:45:20.009075    3491 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Nov 08 23:45:20 functional-400359 kubelet[3491]: I1108 23:45:20.162797    3491 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-400359"
	Nov 08 23:45:20 functional-400359 kubelet[3491]: I1108 23:45:20.407313    3491 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-400359" podUID="9b2db385-150c-4599-b59e-165208edd076"
	Nov 08 23:45:21 functional-400359 kubelet[3491]: E1108 23:45:21.103916    3491 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 23:45:21 functional-400359 kubelet[3491]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 23:45:21 functional-400359 kubelet[3491]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 23:45:21 functional-400359 kubelet[3491]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 23:45:21 functional-400359 kubelet[3491]: I1108 23:45:21.106137    3491 scope.go:117] "RemoveContainer" containerID="bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0"
	
	* 
	* ==> storage-provisioner [824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d] <==
	* I1108 23:45:05.218771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 23:45:05.220296       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 23:45:20.951834  214585 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-08T23:45:20Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-11-08T23:45:20Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: kube-apiserver [bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0]

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-400359 -n functional-400359
helpers_test.go:261: (dbg) Run:  kubectl --context functional-400359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-apiserver-functional-400359
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-400359 describe pod kube-apiserver-functional-400359
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-400359 describe pod kube-apiserver-functional-400359: exit status 1 (87.250004ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-apiserver-functional-400359" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-400359 describe pod kube-apiserver-functional-400359: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (6.75s)
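For context on the post-mortem step at helpers_test.go:261 above: the non-running-pod check is a plain field-selector list (status.phase!=Running) against the cluster, which is how kube-apiserver-functional-400359 was flagged. The harness itself shells out to kubectl; the sketch below is only a hypothetical client-go equivalent of that query, not the harness's code, and the "functional-400359" context name is simply taken from the output above.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig and pin the context used above (assumption: same context name).
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "functional-400359"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace + "/" + p.Name) // e.g. kube-system/kube-apiserver-functional-400359
		}
	}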

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (6.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-400359 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-400359 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (84.288355ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.189:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-400359 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.189:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.189:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.189:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.39.189:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
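The template failure repeated above is a secondary symptom: the API server at 192.168.39.189:8441 was refusing connections, so "kubectl get nodes" returned an empty List (raw data "items":[]), and indexing item 0 of an empty slice is what produces the "reflect: slice index out of range" error. A minimal, self-contained Go sketch (standard library only, not the harness's own code; the guarded variant is hypothetical) reproduces the same failure:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"text/template"
	)

	func main() {
		// The raw data shown in the failure above: an empty List, because the API server was unreachable.
		raw := `{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}`
		var data map[string]interface{}
		if err := json.Unmarshal([]byte(raw), &data); err != nil {
			panic(err)
		}

		// The test's template: "index .items 0" fails at execution time when items is empty,
		// which text/template reports as an "error calling index" failure, as in the output above.
		tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
		if err := template.Must(template.New("output").Parse(tmpl)).Execute(os.Stdout, data); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}

		// A guarded variant (hypothetical, not what the test runs): prints nothing instead of
		// erroring when the node list is empty.
		guarded := `{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}`
		_ = template.Must(template.New("guarded").Parse(guarded)).Execute(os.Stdout, data)
	}

The point is only that the error is a property of indexing an empty list, not of the node labels themselves; once the API server is reachable again, the same template succeeds.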
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-400359 -n functional-400359
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-400359 -n functional-400359: exit status 2 (438.464419ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 logs -n 25: (5.634081203s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-400359 cache add                                              | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | minikube-local-cache-test:functional-400359                              |                   |         |         |                     |                     |
	| cache   | functional-400359 cache delete                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | minikube-local-cache-test:functional-400359                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	| ssh     | functional-400359 ssh sudo                                               | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-400359                                                        | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-400359 ssh                                                    | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-400359 cache reload                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	| ssh     | functional-400359 ssh                                                    | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-400359 kubectl --                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC | 08 Nov 23 23:43 UTC |
	|         | --context functional-400359                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-400359                                                     | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:43 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	| config  | functional-400359 config unset                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| license |                                                                          | minikube          | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	| config  | functional-400359 config get                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-400359 config set                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|         | cpus 2                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-400359 ssh sudo                                               | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|         | systemctl is-active docker                                               |                   |         |         |                     |                     |
	| config  | functional-400359 config get                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-400359 config unset                                           | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC | 08 Nov 23 23:45 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-400359 config get                                             | functional-400359 | jenkins | v1.32.0 | 08 Nov 23 23:45 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:43:59
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:43:59.599157  213888 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:43:59.599412  213888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:43:59.599416  213888 out.go:309] Setting ErrFile to fd 2...
	I1108 23:43:59.599420  213888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:43:59.599606  213888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	I1108 23:43:59.600217  213888 out.go:303] Setting JSON to false
	I1108 23:43:59.601119  213888 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":23194,"bootTime":1699463846,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 23:43:59.601189  213888 start.go:138] virtualization: kvm guest
	I1108 23:43:59.603447  213888 out.go:177] * [functional-400359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 23:43:59.605356  213888 notify.go:220] Checking for updates...
	I1108 23:43:59.605376  213888 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:43:59.607074  213888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:43:59.608704  213888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:43:59.610319  213888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	I1108 23:43:59.611947  213888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 23:43:59.613523  213888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:43:59.615400  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:43:59.615477  213888 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:43:59.615864  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:43:59.615909  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:43:59.631683  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I1108 23:43:59.632150  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:43:59.632691  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:43:59.632708  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:43:59.633075  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:43:59.633250  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.666922  213888 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 23:43:59.668639  213888 start.go:298] selected driver: kvm2
	I1108 23:43:59.668648  213888 start.go:902] validating driver "kvm2" against &{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400
359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:43:59.668789  213888 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:43:59.669167  213888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:43:59.669241  213888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17586-201782/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 23:43:59.685241  213888 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 23:43:59.685958  213888 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 23:43:59.686030  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:43:59.686038  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:43:59.686047  213888 start_flags.go:323] config:
	{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:43:59.686238  213888 iso.go:125] acquiring lock: {Name:mk33479b76ec6919fe69628bcf9e99f9786f49af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:43:59.688123  213888 out.go:177] * Starting control plane node functional-400359 in cluster functional-400359
	I1108 23:43:59.689492  213888 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:43:59.689531  213888 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4
	I1108 23:43:59.689548  213888 cache.go:56] Caching tarball of preloaded images
	I1108 23:43:59.689653  213888 preload.go:174] Found /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1108 23:43:59.689661  213888 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1108 23:43:59.689851  213888 profile.go:148] Saving config to /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/config.json ...
	I1108 23:43:59.690069  213888 start.go:365] acquiring machines lock for functional-400359: {Name:mkc58a906fd9c58de0776efcd0f08335945567ed Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1108 23:43:59.690115  213888 start.go:369] acquired machines lock for "functional-400359" in 32.532µs
	I1108 23:43:59.690130  213888 start.go:96] Skipping create...Using existing machine configuration
	I1108 23:43:59.690134  213888 fix.go:54] fixHost starting: 
	I1108 23:43:59.690432  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:43:59.690465  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:43:59.706016  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46451
	I1108 23:43:59.706457  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:43:59.706983  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:43:59.707003  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:43:59.707316  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:43:59.707534  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.707715  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:43:59.709629  213888 fix.go:102] recreateIfNeeded on functional-400359: state=Running err=<nil>
	W1108 23:43:59.709665  213888 fix.go:128] unexpected machine state, will restart: <nil>
	I1108 23:43:59.711868  213888 out.go:177] * Updating the running kvm2 "functional-400359" VM ...
	I1108 23:43:59.713307  213888 machine.go:88] provisioning docker machine ...
	I1108 23:43:59.713332  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:43:59.713637  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.713880  213888 buildroot.go:166] provisioning hostname "functional-400359"
	I1108 23:43:59.713899  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.714053  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:43:59.716647  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.717013  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:43:59.717073  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.717195  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:43:59.717406  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.717589  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.717824  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:43:59.718013  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:43:59.718360  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:43:59.718370  213888 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-400359 && echo "functional-400359" | sudo tee /etc/hostname
	I1108 23:43:59.863990  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-400359
	
	I1108 23:43:59.864012  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:43:59.866908  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.867252  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:43:59.867363  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:43:59.867442  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:43:59.867690  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.867850  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:43:59.867996  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:43:59.868145  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:43:59.868492  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:43:59.868503  213888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-400359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-400359/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-400359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 23:43:59.999382  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
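The SSH command above applies a simple idempotent rule: if no /etc/hosts line mentions the hostname, rewrite an existing 127.0.1.1 entry or append one. A minimal Go sketch of the same rule (illustrative only; ensureHostsEntry is not a real minikube helper):
	package main
	
	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)
	
	// ensureHostsEntry mirrors the shell snippet above: if the hosts file does
	// not mention the hostname, rewrite an existing "127.0.1.1 ..." line or
	// append a new one.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		text := string(data)
		if strings.Contains(text, hostname) {
			return nil // already present, nothing to do
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(text) {
			text = re.ReplaceAllString(text, "127.0.1.1 "+hostname)
		} else {
			text += fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
		}
		return os.WriteFile(path, []byte(text), 0644)
	}
	
	func main() {
		if err := ensureHostsEntry("/etc/hosts", "functional-400359"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}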
	I1108 23:43:59.999410  213888 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17586-201782/.minikube CaCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17586-201782/.minikube}
	I1108 23:43:59.999434  213888 buildroot.go:174] setting up certificates
	I1108 23:43:59.999445  213888 provision.go:83] configureAuth start
	I1108 23:43:59.999455  213888 main.go:141] libmachine: (functional-400359) Calling .GetMachineName
	I1108 23:43:59.999781  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:00.002662  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.002978  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.003014  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.003248  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.005651  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.006085  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.006106  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.006287  213888 provision.go:138] copyHostCerts
	I1108 23:44:00.006374  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem, removing ...
	I1108 23:44:00.006389  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem
	I1108 23:44:00.006451  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/ca.pem (1078 bytes)
	I1108 23:44:00.006581  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem, removing ...
	I1108 23:44:00.006587  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem
	I1108 23:44:00.006617  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/cert.pem (1123 bytes)
	I1108 23:44:00.006719  213888 exec_runner.go:144] found /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem, removing ...
	I1108 23:44:00.006724  213888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem
	I1108 23:44:00.006742  213888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17586-201782/.minikube/key.pem (1679 bytes)
	I1108 23:44:00.006784  213888 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem org=jenkins.functional-400359 san=[192.168.39.189 192.168.39.189 localhost 127.0.0.1 minikube functional-400359]
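The server certificate generated here carries the SANs listed in the log line (the node IP, localhost, 127.0.0.1, minikube, and the profile name). Below is a self-contained Go sketch of producing such a certificate with crypto/x509; unlike minikube's provisioner, which signs with ca-key.pem, this sketch self-signs to stay short, and all names are illustrative:
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Generate a key and a self-signed server certificate whose SANs match
		// the ones in the log line above (illustrative, not the real CA-signed flow).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.functional-400359"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "functional-400359"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.189"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}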
	I1108 23:44:00.203873  213888 provision.go:172] copyRemoteCerts
	I1108 23:44:00.203931  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 23:44:00.203956  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.206797  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.207094  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.207119  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.207305  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.207516  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.207692  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.207814  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.301445  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1108 23:44:00.331684  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1108 23:44:00.361187  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 23:44:00.388214  213888 provision.go:86] duration metric: configureAuth took 388.751766ms
	I1108 23:44:00.388241  213888 buildroot.go:189] setting minikube options for container-runtime
	I1108 23:44:00.388477  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:44:00.388484  213888 machine.go:91] provisioned docker machine in 675.168638ms
	I1108 23:44:00.388492  213888 start.go:300] post-start starting for "functional-400359" (driver="kvm2")
	I1108 23:44:00.388500  213888 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 23:44:00.388535  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.388924  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 23:44:00.388948  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.391561  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.391940  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.391967  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.392105  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.392316  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.392453  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.392611  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.488199  213888 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 23:44:00.492976  213888 info.go:137] Remote host: Buildroot 2021.02.12
	I1108 23:44:00.492992  213888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-201782/.minikube/addons for local assets ...
	I1108 23:44:00.493051  213888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17586-201782/.minikube/files for local assets ...
	I1108 23:44:00.493113  213888 filesync.go:149] local asset: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem -> 2089632.pem in /etc/ssl/certs
	I1108 23:44:00.493174  213888 filesync.go:149] local asset: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/test/nested/copy/208963/hosts -> hosts in /etc/test/nested/copy/208963
	I1108 23:44:00.493206  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/208963
	I1108 23:44:00.501656  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem --> /etc/ssl/certs/2089632.pem (1708 bytes)
	I1108 23:44:00.525422  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/test/nested/copy/208963/hosts --> /etc/test/nested/copy/208963/hosts (40 bytes)
	I1108 23:44:00.548996  213888 start.go:303] post-start completed in 160.490436ms
	I1108 23:44:00.549028  213888 fix.go:56] fixHost completed within 858.891713ms
	I1108 23:44:00.549103  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.551962  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.552311  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.552329  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.552563  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.552735  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.552911  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.553036  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.553160  213888 main.go:141] libmachine: Using SSH client type: native
	I1108 23:44:00.553504  213888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 192.168.39.189 22 <nil> <nil>}
	I1108 23:44:00.553510  213888 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1108 23:44:00.679007  213888 main.go:141] libmachine: SSH cmd err, output: <nil>: 1699487040.675193612
	
	I1108 23:44:00.679025  213888 fix.go:206] guest clock: 1699487040.675193612
	I1108 23:44:00.679031  213888 fix.go:219] Guest: 2023-11-08 23:44:00.675193612 +0000 UTC Remote: 2023-11-08 23:44:00.549031363 +0000 UTC m=+1.003889169 (delta=126.162249ms)
	I1108 23:44:00.679051  213888 fix.go:190] guest clock delta is within tolerance: 126.162249ms
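The guest-clock check runs date +%s.%N inside the VM, parses the result, and compares it against the host clock with a tolerance. A small Go sketch of that comparison (the 2s tolerance is assumed; withinTolerance is illustrative, not minikube's fix.go code):
	package main
	
	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)
	
	// withinTolerance parses the guest's `date +%s.%N` output and reports whether
	// the guest/host clock delta stays under the given tolerance.
	func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		guest := time.Unix(sec, nsec)
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tol)
	}
	
	func main() {
		delta, ok := withinTolerance("1699487040.675193612", time.Now(), 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}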
	I1108 23:44:00.679055  213888 start.go:83] releasing machines lock for "functional-400359", held for 988.934098ms
	I1108 23:44:00.679080  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.679402  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:00.682635  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.683021  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.683048  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.683271  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.683917  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.684098  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:00.684213  213888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 23:44:00.684252  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.684416  213888 ssh_runner.go:195] Run: cat /version.json
	I1108 23:44:00.684440  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:00.687054  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687399  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.687426  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687449  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687587  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.687788  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.687907  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:00.687935  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:00.687948  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.688119  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:00.688118  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.688285  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:00.688448  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:00.688589  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:00.802586  213888 ssh_runner.go:195] Run: systemctl --version
	I1108 23:44:00.808787  213888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 23:44:00.814779  213888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 23:44:00.814850  213888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 23:44:00.824904  213888 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 23:44:00.824923  213888 start.go:472] detecting cgroup driver to use...
	I1108 23:44:00.824994  213888 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1108 23:44:00.839653  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1108 23:44:00.852631  213888 docker.go:203] disabling cri-docker service (if available) ...
	I1108 23:44:00.852687  213888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 23:44:00.865664  213888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 23:44:00.878442  213888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 23:44:01.013896  213888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 23:44:01.176298  213888 docker.go:219] disabling docker service ...
	I1108 23:44:01.176368  213888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 23:44:01.191617  213888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 23:44:01.205423  213888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 23:44:01.352320  213888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 23:44:01.505796  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 23:44:01.520373  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 23:44:01.539920  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1108 23:44:01.552198  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1108 23:44:01.564553  213888 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1108 23:44:01.564634  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1108 23:44:01.577530  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:44:01.589460  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1108 23:44:01.601621  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1108 23:44:01.615054  213888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 23:44:01.626891  213888 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
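These sed invocations are line-oriented rewrites of /etc/containerd/config.toml (sandbox image, cgroup driver, runc v2 runtime, CNI conf dir). As a hedged illustration, the SystemdCgroup edit expressed as a Go regexp rewrite rather than sed:
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// setSystemdCgroup rewrites any "SystemdCgroup = ..." line, preserving its
	// indentation, the same effect as the sed command in the log above.
	func setSystemdCgroup(conf string, enabled bool) string {
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
	}
	
	func main() {
		sample := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
		fmt.Print(setSystemdCgroup(sample, false))
	}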
	I1108 23:44:01.638637  213888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 23:44:01.649235  213888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 23:44:01.660480  213888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 23:44:01.793850  213888 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1108 23:44:01.824923  213888 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1108 23:44:01.824991  213888 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1108 23:44:01.831130  213888 retry.go:31] will retry after 821.206397ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1108 23:44:02.653187  213888 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
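The retry above is a poll-until-timeout on the containerd socket path. A minimal Go sketch of that wait loop (the roughly 500ms–1s retry interval and 60s timeout mirror the log; the function name is illustrative):
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls for a path until it appears or the timeout expires,
	// mirroring the "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}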
	I1108 23:44:02.660143  213888 start.go:540] Will wait 60s for crictl version
	I1108 23:44:02.660193  213888 ssh_runner.go:195] Run: which crictl
	I1108 23:44:02.665280  213888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1108 23:44:02.711632  213888 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.8
	RuntimeApiVersion:  v1
	I1108 23:44:02.711708  213888 ssh_runner.go:195] Run: containerd --version
	I1108 23:44:02.742401  213888 ssh_runner.go:195] Run: containerd --version
	I1108 23:44:02.772662  213888 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.7.8 ...
	I1108 23:44:02.774143  213888 main.go:141] libmachine: (functional-400359) Calling .GetIP
	I1108 23:44:02.776902  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:02.777294  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:02.777321  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:02.777524  213888 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1108 23:44:02.784598  213888 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1108 23:44:02.786474  213888 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:44:02.786612  213888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:44:02.834765  213888 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:44:02.834781  213888 containerd.go:518] Images already preloaded, skipping extraction
	I1108 23:44:02.834839  213888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 23:44:02.877779  213888 containerd.go:604] all images are preloaded for containerd runtime.
	I1108 23:44:02.877797  213888 cache_images.go:84] Images are preloaded, skipping loading
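The preload check asks crictl for its image list in JSON and verifies the expected tags are present. A hedged Go sketch of reading that list follows; the images/repoTags field names are assumptions about crictl's JSON output shape, not taken from this log:
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// criImages models only the part of `crictl images --output json` that a
	// preload check needs: the repo tags of each image (assumed field names).
	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	
	func listImageTags() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return nil, err
		}
		var parsed criImages
		if err := json.Unmarshal(out, &parsed); err != nil {
			return nil, err
		}
		var tags []string
		for _, img := range parsed.Images {
			tags = append(tags, img.RepoTags...)
		}
		return tags, nil
	}
	
	func main() {
		tags, err := listImageTags()
		if err != nil {
			fmt.Println("crictl not available here:", err)
			return
		}
		fmt.Println(tags)
	}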
	I1108 23:44:02.877870  213888 ssh_runner.go:195] Run: sudo crictl info
	I1108 23:44:02.924597  213888 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1108 23:44:02.924626  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:44:02.924635  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:44:02.924644  213888 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1108 23:44:02.924661  213888 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.189 APIServerPort:8441 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-400359 NodeName:functional-400359 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubele
tConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 23:44:02.924813  213888 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.189
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-400359"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.189
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 23:44:02.924893  213888 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-400359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1108 23:44:02.924953  213888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1108 23:44:02.936489  213888 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 23:44:02.936562  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 23:44:02.947183  213888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I1108 23:44:02.966007  213888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 23:44:02.985587  213888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1962 bytes)
	I1108 23:44:03.005107  213888 ssh_runner.go:195] Run: grep 192.168.39.189	control-plane.minikube.internal$ /etc/hosts
	I1108 23:44:03.010099  213888 certs.go:56] Setting up /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359 for IP: 192.168.39.189
	I1108 23:44:03.010128  213888 certs.go:190] acquiring lock for shared ca certs: {Name:mk39cbc6402159d6a738802f6361f72eac5d34d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:03.010382  213888 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17586-201782/.minikube/ca.key
	I1108 23:44:03.010425  213888 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.key
	I1108 23:44:03.010497  213888 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.key
	I1108 23:44:03.010540  213888 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.key.3964182b
	I1108 23:44:03.010588  213888 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.key
	I1108 23:44:03.010739  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963.pem (1338 bytes)
	W1108 23:44:03.010780  213888 certs.go:433] ignoring /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963_empty.pem, impossibly tiny 0 bytes
	I1108 23:44:03.010790  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 23:44:03.010822  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/ca.pem (1078 bytes)
	I1108 23:44:03.010853  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/cert.pem (1123 bytes)
	I1108 23:44:03.010885  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/certs/home/jenkins/minikube-integration/17586-201782/.minikube/certs/key.pem (1679 bytes)
	I1108 23:44:03.010944  213888 certs.go:437] found cert: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem (1708 bytes)
	I1108 23:44:03.011800  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1108 23:44:03.052476  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1108 23:44:03.084167  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 23:44:03.113455  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 23:44:03.138855  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 23:44:03.170000  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1108 23:44:03.203207  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 23:44:03.233030  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 23:44:03.262431  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/certs/208963.pem --> /usr/share/ca-certificates/208963.pem (1338 bytes)
	I1108 23:44:03.288670  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/ssl/certs/2089632.pem --> /usr/share/ca-certificates/2089632.pem (1708 bytes)
	I1108 23:44:03.317344  213888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17586-201782/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 23:44:03.345150  213888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 23:44:03.367221  213888 ssh_runner.go:195] Run: openssl version
	I1108 23:44:03.373631  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2089632.pem && ln -fs /usr/share/ca-certificates/2089632.pem /etc/ssl/certs/2089632.pem"
	I1108 23:44:03.388662  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.394338  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  8 23:42 /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.394401  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2089632.pem
	I1108 23:44:03.400580  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2089632.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 23:44:03.412248  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 23:44:03.425515  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.430926  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  8 23:35 /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.430990  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 23:44:03.437443  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 23:44:03.447837  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/208963.pem && ln -fs /usr/share/ca-certificates/208963.pem /etc/ssl/certs/208963.pem"
	I1108 23:44:03.461453  213888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.467398  213888 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  8 23:42 /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.467478  213888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/208963.pem
	I1108 23:44:03.474228  213888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/208963.pem /etc/ssl/certs/51391683.0"
	I1108 23:44:03.487446  213888 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1108 23:44:03.492652  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 23:44:03.499552  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 23:44:03.507193  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 23:44:03.514236  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 23:44:03.521522  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 23:44:03.527708  213888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
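Each `openssl x509 -checkend 86400` run asks whether a certificate expires within the next 24 hours. The same question answered in pure Go with crypto/x509 (the path and function name are illustrative):
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM certificate at path expires within
	// the given window, like `openssl x509 -checkend` does.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}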
	I1108 23:44:03.534082  213888 kubeadm.go:404] StartCluster: {Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400359 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:44:03.534196  213888 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1108 23:44:03.534267  213888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:44:03.584679  213888 cri.go:89] found id: "db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f"
	I1108 23:44:03.584695  213888 cri.go:89] found id: "e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5"
	I1108 23:44:03.584698  213888 cri.go:89] found id: "998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1"
	I1108 23:44:03.584701  213888 cri.go:89] found id: "daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9"
	I1108 23:44:03.584704  213888 cri.go:89] found id: "b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2"
	I1108 23:44:03.584707  213888 cri.go:89] found id: "46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573"
	I1108 23:44:03.584709  213888 cri.go:89] found id: "a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086"
	I1108 23:44:03.584711  213888 cri.go:89] found id: ""
	I1108 23:44:03.584767  213888 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1108 23:44:03.616378  213888 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","pid":1604,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9/rootfs","created":"2023-11-08T23:43:40.318157335Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wv6f7_7ab3ac5b-5a0e-462b-a171-08f507184dfa","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wv6f7","io.kubernetes.cri.sand
box-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"7ab3ac5b-5a0e-462b-a171-08f507184dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","pid":1110,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1/rootfs","created":"2023-11-08T23:43:18.68773069Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-400359_faaa6dec7d9cbf75400a4930b93bdc7d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes
.cri.sandbox-name":"etcd-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"faaa6dec7d9cbf75400a4930b93bdc7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573","pid":1243,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573/rootfs","created":"2023-11-08T23:43:19.79473196Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri.sandbox-id":"127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1","io.kubernetes.cri.sandbox-name":"etcd-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fa
aa6dec7d9cbf75400a4930b93bdc7d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","pid":1137,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0/rootfs","created":"2023-11-08T23:43:18.759582Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-400359","io.ku
bernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"782fbbe1f7d627cd92711fb14a0b0813"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","pid":1799,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44/rootfs","created":"2023-11-08T23:43:41.584597939Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-5dd5756b68-tqvtr_b03be54f-57e6-4247-84ba-9545f9b1b4ed","io.kubernetes.cri.sandbox-memory
":"178257920","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-tqvtr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b03be54f-57e6-4247-84ba-9545f9b1b4ed"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1","pid":1633,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1/rootfs","created":"2023-11-08T23:43:40.529772065Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.28.3","io.kubernetes.cri.sandbox-id":"0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9","io.kubernetes.cri.sandbox-name":"kube-proxy-wv6f7","io.kubernetes.cri.sandbox-namespace":"kube-sy
stem","io.kubernetes.cri.sandbox-uid":"7ab3ac5b-5a0e-462b-a171-08f507184dfa"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","pid":1160,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b/rootfs","created":"2023-11-08T23:43:18.813882118Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-400359_af28ec4ee73fcf841ab21630a0a61078","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox
-name":"kube-scheduler-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"af28ec4ee73fcf841ab21630a0a61078"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","pid":1838,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186/rootfs","created":"2023-11-08T23:43:41.837718349Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_01aed977-1439-433c-b8b1-869c92
fcd9e2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"01aed977-1439-433c-b8b1-869c92fcd9e2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086","pid":1198,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086/rootfs","created":"2023-11-08T23:43:19.509573182Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.28.3","io.kubernetes.cri.sandbox-id":"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-40
0359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"782fbbe1f7d627cd92711fb14a0b0813"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2","pid":1272,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2/rootfs","created":"2023-11-08T23:43:19.928879069Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.28.3","io.kubernetes.cri.sandbox-id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.
cri.sandbox-uid":"926dd51d8b9a510a42b3d2d730469c12"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","pid":1169,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef/rootfs","created":"2023-11-08T23:43:18.854841205Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-400359_926dd51d8b9a510a42b3d2d730469c12","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-con
troller-manager-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"926dd51d8b9a510a42b3d2d730469c12"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9","pid":1308,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9/rootfs","created":"2023-11-08T23:43:20.119265886Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.28.3","io.kubernetes.cri.sandbox-id":"9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-400359","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernete
s.cri.sandbox-uid":"af28ec4ee73fcf841ab21630a0a61078"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f","pid":1923,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f/rootfs","created":"2023-11-08T23:43:43.423326377Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"01aed977-1439-433c-b8b1-869c92fcd9e2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id
":"e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5","pid":1870,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5/rootfs","created":"2023-11-08T23:43:42.0245694Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri.sandbox-id":"8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44","io.kubernetes.cri.sandbox-name":"coredns-5dd5756b68-tqvtr","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b03be54f-57e6-4247-84ba-9545f9b1b4ed"},"owner":"root"}]
	I1108 23:44:03.616807  213888 cri.go:126] list returned 14 containers
	I1108 23:44:03.616824  213888 cri.go:129] container: {ID:0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9 Status:running}
	I1108 23:44:03.616850  213888 cri.go:131] skipping 0d0883976452b75f1ab64aa123dfc56c913a436e158ad9af2d955ecda324b9a9 - not in ps
	I1108 23:44:03.616857  213888 cri.go:129] container: {ID:127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1 Status:running}
	I1108 23:44:03.616865  213888 cri.go:131] skipping 127436741085245ab94912e80b9f8c289209ce617b398a4f4dd681d9b28bd0e1 - not in ps
	I1108 23:44:03.616871  213888 cri.go:129] container: {ID:46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 Status:running}
	I1108 23:44:03.616879  213888 cri.go:135] skipping {46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 running}: state = "running", want "paused"
	I1108 23:44:03.616892  213888 cri.go:129] container: {ID:523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 Status:running}
	I1108 23:44:03.616900  213888 cri.go:131] skipping 523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 - not in ps
	I1108 23:44:03.616906  213888 cri.go:129] container: {ID:8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44 Status:running}
	I1108 23:44:03.616913  213888 cri.go:131] skipping 8005a17990fd0a317ebcb5bd053a2c861d75cd7e32f968573e4e0f6babba3c44 - not in ps
	I1108 23:44:03.616919  213888 cri.go:129] container: {ID:998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 Status:running}
	I1108 23:44:03.616927  213888 cri.go:135] skipping {998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 running}: state = "running", want "paused"
	I1108 23:44:03.616934  213888 cri.go:129] container: {ID:9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b Status:running}
	I1108 23:44:03.616941  213888 cri.go:131] skipping 9bb1405590c60c563f46738683cb01b19e778367c10fd9613789b03e237f732b - not in ps
	I1108 23:44:03.616947  213888 cri.go:129] container: {ID:9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186 Status:running}
	I1108 23:44:03.616954  213888 cri.go:131] skipping 9c7477be159572ccfcd12cbae317482ff324bcf61cb9e5e85a54196a4f045186 - not in ps
	I1108 23:44:03.616959  213888 cri.go:129] container: {ID:a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086 Status:running}
	I1108 23:44:03.616963  213888 cri.go:135] skipping {a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086 running}: state = "running", want "paused"
	I1108 23:44:03.616967  213888 cri.go:129] container: {ID:b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 Status:running}
	I1108 23:44:03.616973  213888 cri.go:135] skipping {b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 running}: state = "running", want "paused"
	I1108 23:44:03.616980  213888 cri.go:129] container: {ID:ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef Status:running}
	I1108 23:44:03.616988  213888 cri.go:131] skipping ca712d9c0441aff1298c087b96df534db5fe27201143325303ef19a9011b40ef - not in ps
	I1108 23:44:03.616993  213888 cri.go:129] container: {ID:daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 Status:running}
	I1108 23:44:03.617001  213888 cri.go:135] skipping {daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 running}: state = "running", want "paused"
	I1108 23:44:03.617019  213888 cri.go:129] container: {ID:db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f Status:running}
	I1108 23:44:03.617027  213888 cri.go:135] skipping {db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f running}: state = "running", want "paused"
	I1108 23:44:03.617034  213888 cri.go:129] container: {ID:e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 Status:running}
	I1108 23:44:03.617041  213888 cri.go:135] skipping {e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 running}: state = "running", want "paused"
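The block above shows the unpause path walking the runc container list (the wrapped JSON earlier in the log) and skipping every entry, either because its ID is not in the crictl ps set or because its state is "running" when only "paused" containers are of interest. A minimal, self-contained Go sketch of that filtering step, assuming input shaped like the `id`/`status` fields of the runc list output (illustrative only, not minikube's cri package):

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer mirrors the two fields of `runc list -f json` output
// that the filtering step actually needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterByState keeps only containers whose status matches want (e.g. "paused")
// and whose ID also appears in the set returned by `crictl ps` (inPs).
func filterByState(raw []byte, want string, inPs map[string]bool) ([]string, error) {
	var list []runcContainer
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, fmt.Errorf("decoding runc list output: %w", err)
	}
	var keep []string
	for _, c := range list {
		if !inPs[c.ID] {
			// mirrors the "skipping <id> - not in ps" log lines
			continue
		}
		if c.Status != want {
			// mirrors `skipping {<id> running}: state = "running", want "paused"`
			continue
		}
		keep = append(keep, c.ID)
	}
	return keep, nil
}

func main() {
	raw := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids, err := filterByState(raw, "paused", map[string]bool{"abc": true, "def": true})
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // [def]
}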
	I1108 23:44:03.617112  213888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 23:44:03.629140  213888 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1108 23:44:03.629156  213888 kubeadm.go:636] restartCluster start
	I1108 23:44:03.629300  213888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 23:44:03.640035  213888 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:03.640634  213888 kubeconfig.go:92] found "functional-400359" server: "https://192.168.39.189:8441"
	I1108 23:44:03.641989  213888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 23:44:03.652731  213888 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.189"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
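The diff above is how the restart path concludes "needs reconfigure": the deployed kubeadm.yaml was rendered with the default admission plugins, while the new run asks for NamespaceAutoProvision only. A hedged sketch of the same idea, running `diff -u` over the two rendered configs and treating any non-empty diff as a reconfigure signal (paths and the helper name are illustrative, not minikube's):

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfigure runs `diff -u old new` and reports whether the two rendered
// kubeadm configs differ. diff exits 0 when identical, 1 when they differ,
// and >1 on error.
func needsReconfigure(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, string(out), nil // differs; out holds the unified diff
	}
	return false, "", fmt.Errorf("diff failed: %w", err)
}

func main() {
	// Illustrative local paths; on the node these would be
	// /var/tmp/minikube/kubeadm.yaml and kubeadm.yaml.new.
	differs, diff, err := needsReconfigure("kubeadm.yaml", "kubeadm.yaml.new")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if differs {
		fmt.Println("needs reconfigure: configs differ:\n" + diff)
	}
}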
	I1108 23:44:03.652746  213888 kubeadm.go:1128] stopping kube-system containers ...
	I1108 23:44:03.652762  213888 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1108 23:44:03.652812  213888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 23:44:03.699235  213888 cri.go:89] found id: "db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f"
	I1108 23:44:03.699249  213888 cri.go:89] found id: "e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5"
	I1108 23:44:03.699251  213888 cri.go:89] found id: "998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1"
	I1108 23:44:03.699255  213888 cri.go:89] found id: "daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9"
	I1108 23:44:03.699260  213888 cri.go:89] found id: "b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2"
	I1108 23:44:03.699263  213888 cri.go:89] found id: "46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573"
	I1108 23:44:03.699265  213888 cri.go:89] found id: "a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086"
	I1108 23:44:03.699268  213888 cri.go:89] found id: ""
	I1108 23:44:03.699272  213888 cri.go:234] Stopping containers: [db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086]
	I1108 23:44:03.699323  213888 ssh_runner.go:195] Run: which crictl
	I1108 23:44:03.703856  213888 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086
	I1108 23:44:19.459008  213888 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 db750d3b7aa6664b0c6eadc3b3bc99e8ecc97130d8e1f80fe7f384be107f630f e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5 998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1 daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9 b7b06d9b85df7ed7b5a7fb3bc570deb06bdd1e7aa18ddb77481985d565b81af2 46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573 a12443887300f2bd2875038156b612cfb9acc65f9ae3c8c952ff29ea0fda9086: (15.75506263s)
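Before regenerating certificates, the restart path stops every kube-system container: it first collects IDs with a namespace label filter and then stops them with a 10-second grace period, which is why the stop above takes roughly 15s. A minimal sketch of the same two crictl calls (flags taken verbatim from the log) driven through os/exec, assuming crictl is on PATH and sudo is available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists all CRI containers labelled with the
// kube-system namespace and stops them with a 10s grace period.
func stopKubeSystemContainers() error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return fmt.Errorf("listing containers: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil
	}
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		return fmt.Errorf("stopping containers: %w", err)
	}
	return nil
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println("error:", err)
	}
}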
	I1108 23:44:19.459080  213888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1108 23:44:19.504154  213888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 23:44:19.515266  213888 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  8 23:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Nov  8 23:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  8 23:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Nov  8 23:43 /etc/kubernetes/scheduler.conf
	
	I1108 23:44:19.515346  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1108 23:44:19.524771  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1108 23:44:19.534582  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1108 23:44:19.544348  213888 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:19.544402  213888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 23:44:19.553487  213888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1108 23:44:19.562898  213888 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1108 23:44:19.562943  213888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 23:44:19.572855  213888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 23:44:19.583092  213888 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1108 23:44:19.583112  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:19.656656  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:20.718251  213888 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.061543708s)
	I1108 23:44:20.718274  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:20.940824  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:21.049550  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:21.155180  213888 api_server.go:52] waiting for apiserver process to appear ...
	I1108 23:44:21.155262  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:21.170827  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:21.687533  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:22.187100  213888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:44:22.201909  213888 api_server.go:72] duration metric: took 1.046727455s to wait for apiserver process to appear ...
	I1108 23:44:22.201930  213888 api_server.go:88] waiting for apiserver healthz status ...
	I1108 23:44:22.201951  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:22.202592  213888 api_server.go:269] stopped: https://192.168.39.189:8441/healthz: Get "https://192.168.39.189:8441/healthz": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:22.202621  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:22.203025  213888 api_server.go:269] stopped: https://192.168.39.189:8441/healthz: Get "https://192.168.39.189:8441/healthz": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:22.703898  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.321821  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 23:44:24.321848  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 23:44:24.321866  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.331452  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 23:44:24.331472  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 23:44:24.703560  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:24.710858  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 23:44:24.710888  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 23:44:25.203966  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:25.210943  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 23:44:25.210976  213888 api_server.go:103] status: https://192.168.39.189:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 23:44:25.703512  213888 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8441/healthz ...
	I1108 23:44:25.709194  213888 api_server.go:279] https://192.168.39.189:8441/healthz returned 200:
	ok
	I1108 23:44:25.717645  213888 api_server.go:141] control plane version: v1.28.3
	I1108 23:44:25.717670  213888 api_server.go:131] duration metric: took 3.515732599s to wait for apiserver health ...
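The healthz sequence above is a plain poll loop: connection refused, 403 from the anonymous user before RBAC bootstrap completes, and 500 with failing poststarthooks are all treated as "not ready yet", and the wait ends on the first 200 "ok". A self-contained sketch of such a loop, skipping TLS verification as a bootstrap-time check against a not-yet-trusted apiserver cert typically must (illustrative only, not minikube's api_server package):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Dial errors and non-200 statuses (403 during RBAC bootstrap, 500 while
// poststarthooks are still failing) are treated as "try again".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver's serving cert is not trusted by the caller here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.189:8441/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}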
	I1108 23:44:25.717682  213888 cni.go:84] Creating CNI manager for ""
	I1108 23:44:25.717690  213888 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:44:25.719887  213888 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1108 23:44:25.721531  213888 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1108 23:44:25.734492  213888 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1108 23:44:25.771439  213888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 23:44:25.784433  213888 system_pods.go:59] 7 kube-system pods found
	I1108 23:44:25.784465  213888 system_pods.go:61] "coredns-5dd5756b68-tqvtr" [b03be54f-57e6-4247-84ba-9545f9b1b4ed] Running
	I1108 23:44:25.784475  213888 system_pods.go:61] "etcd-functional-400359" [70bdf2a8-b999-4d46-baf3-0c9267d9d3ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 23:44:25.784489  213888 system_pods.go:61] "kube-apiserver-functional-400359" [9b2db385-150c-4599-b59e-165208edd076] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 23:44:25.784498  213888 system_pods.go:61] "kube-controller-manager-functional-400359" [e2f2bb0b-f018-4ada-bd5d-d225b097763b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 23:44:25.784504  213888 system_pods.go:61] "kube-proxy-wv6f7" [7ab3ac5b-5a0e-462b-a171-08f507184dfa] Running
	I1108 23:44:25.784511  213888 system_pods.go:61] "kube-scheduler-functional-400359" [0156fad8-02e5-40ae-a5d1-17824d5c238b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 23:44:25.784521  213888 system_pods.go:61] "storage-provisioner" [01aed977-1439-433c-b8b1-869c92fcd9e2] Running
	I1108 23:44:25.784531  213888 system_pods.go:74] duration metric: took 13.073006ms to wait for pod list to return data ...
	I1108 23:44:25.784539  213888 node_conditions.go:102] verifying NodePressure condition ...
	I1108 23:44:25.793569  213888 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1108 23:44:25.793597  213888 node_conditions.go:123] node cpu capacity is 2
	I1108 23:44:25.793611  213888 node_conditions.go:105] duration metric: took 9.06541ms to run NodePressure ...
	I1108 23:44:25.793633  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1108 23:44:26.114141  213888 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1108 23:44:26.120712  213888 kubeadm.go:787] kubelet initialised
	I1108 23:44:26.120723  213888 kubeadm.go:788] duration metric: took 6.565858ms waiting for restarted kubelet to initialise ...
	I1108 23:44:26.120731  213888 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1108 23:44:26.131331  213888 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:26.138144  213888 pod_ready.go:92] pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:26.138155  213888 pod_ready.go:81] duration metric: took 6.806304ms waiting for pod "coredns-5dd5756b68-tqvtr" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:26.138164  213888 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:28.164811  213888 pod_ready.go:102] pod "etcd-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:30.665514  213888 pod_ready.go:92] pod "etcd-functional-400359" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:30.665553  213888 pod_ready.go:81] duration metric: took 4.527359591s waiting for pod "etcd-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:30.665565  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:32.689403  213888 pod_ready.go:102] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:34.690254  213888 pod_ready.go:102] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:35.686775  213888 pod_ready.go:92] pod "kube-apiserver-functional-400359" in "kube-system" namespace has status "Ready":"True"
	I1108 23:44:35.686791  213888 pod_ready.go:81] duration metric: took 5.021218707s waiting for pod "kube-apiserver-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:35.686800  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:37.708359  213888 pod_ready.go:102] pod "kube-controller-manager-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:40.208162  213888 pod_ready.go:102] pod "kube-controller-manager-functional-400359" in "kube-system" namespace has status "Ready":"False"
	I1108 23:44:41.201149  213888 pod_ready.go:97] error getting pod "kube-controller-manager-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201165  213888 pod_ready.go:81] duration metric: took 5.514358749s waiting for pod "kube-controller-manager-functional-400359" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.201176  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201204  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wv6f7" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:41.201819  213888 pod_ready.go:97] error getting pod "kube-proxy-wv6f7" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wv6f7": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201831  213888 pod_ready.go:81] duration metric: took 621.035µs waiting for pod "kube-proxy-wv6f7" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.201841  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-wv6f7" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-proxy-wv6f7": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.201857  213888 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-400359" in "kube-system" namespace to be "Ready" ...
	I1108 23:44:41.202340  213888 pod_ready.go:97] error getting pod "kube-scheduler-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.202352  213888 pod_ready.go:81] duration metric: took 489.317µs waiting for pod "kube-scheduler-functional-400359" in "kube-system" namespace to be "Ready" ...
	E1108 23:44:41.202362  213888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-400359" in "kube-system" namespace (skipping!): Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.202373  213888 pod_ready.go:38] duration metric: took 15.08163132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
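After the control plane comes back, the log waits up to 4m0s for each system-critical pod to report the Ready condition; the waits for kube-controller-manager, kube-proxy and kube-scheduler are abandoned once the apiserver connection starts being refused. A hedged client-go sketch of one such per-pod wait, assuming a kubeconfig is available; unlike the log above it simply retries on errors rather than reproducing the skip-on-connection-refused behaviour:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod has a Ready condition set to True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod every 2s until it is Ready or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as "not ready yet" and keep polling
		}
		return isPodReady(pod), nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Pod name and namespace taken from the log above.
	if err := waitPodReady(cs, "kube-system", "etcd-functional-400359", 4*time.Minute); err != nil {
		fmt.Println("wait failed:", err)
	}
}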
	I1108 23:44:41.202390  213888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 23:44:41.213978  213888 ops.go:34] apiserver oom_adj: -16
	I1108 23:44:41.213994  213888 kubeadm.go:640] restartCluster took 37.584832416s
	I1108 23:44:41.214002  213888 kubeadm.go:406] StartCluster complete in 37.679936432s
	I1108 23:44:41.214034  213888 settings.go:142] acquiring lock: {Name:mkb2acb83ccee48e6a009b8a47bf5424e6c38acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:41.214142  213888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:44:41.215036  213888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17586-201782/kubeconfig: {Name:mk9c6e9f67ac12aac98932c0b45c3a0608805854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 23:44:41.215314  213888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 23:44:41.215404  213888 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1108 23:44:41.215479  213888 addons.go:69] Setting storage-provisioner=true in profile "functional-400359"
	I1108 23:44:41.215505  213888 addons.go:69] Setting default-storageclass=true in profile "functional-400359"
	I1108 23:44:41.215525  213888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-400359"
	I1108 23:44:41.215526  213888 addons.go:231] Setting addon storage-provisioner=true in "functional-400359"
	W1108 23:44:41.215533  213888 addons.go:240] addon storage-provisioner should already be in state true
	I1108 23:44:41.215537  213888 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:44:41.215605  213888 host.go:66] Checking if "functional-400359" exists ...
	I1108 23:44:41.215913  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.215951  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.216018  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.216055  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	W1108 23:44:41.216959  213888 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "functional-400359" context to 1 replicas: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.39.189:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:41.216977  213888 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while getting "coredns" deployment scale: Get "https://192.168.39.189:8441/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.217012  213888 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1108 23:44:41.220368  213888 out.go:177] * Verifying Kubernetes components...
	I1108 23:44:41.222004  213888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:44:41.231875  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I1108 23:44:41.232530  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.233190  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.233218  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.233719  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.234280  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.234325  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.237697  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38383
	I1108 23:44:41.238255  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.238752  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.238768  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.239192  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.239445  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.244598  213888 addons.go:231] Setting addon default-storageclass=true in "functional-400359"
	W1108 23:44:41.244614  213888 addons.go:240] addon default-storageclass should already be in state true
	I1108 23:44:41.244642  213888 host.go:66] Checking if "functional-400359" exists ...
	I1108 23:44:41.245132  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.245164  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.252037  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46087
	I1108 23:44:41.252498  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.253020  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.253051  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.253456  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.253670  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.255485  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:41.257960  213888 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 23:44:41.259863  213888 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 23:44:41.259875  213888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 23:44:41.259896  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:41.261665  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I1108 23:44:41.262263  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.262840  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.262867  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.263263  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.263662  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.263878  213888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:44:41.263916  213888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:44:41.264121  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:41.264156  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.264394  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:41.264629  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:41.264831  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:41.265036  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:41.280509  213888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I1108 23:44:41.281054  213888 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:44:41.281632  213888 main.go:141] libmachine: Using API Version  1
	I1108 23:44:41.281643  213888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:44:41.282046  213888 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:44:41.282278  213888 main.go:141] libmachine: (functional-400359) Calling .GetState
	I1108 23:44:41.284072  213888 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:44:41.284406  213888 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 23:44:41.284420  213888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 23:44:41.284442  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
	I1108 23:44:41.287607  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.288057  213888 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
	I1108 23:44:41.288091  213888 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
	I1108 23:44:41.288286  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
	I1108 23:44:41.288503  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
	I1108 23:44:41.288686  213888 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
	I1108 23:44:41.288836  213888 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
	I1108 23:44:41.340989  213888 node_ready.go:35] waiting up to 6m0s for node "functional-400359" to be "Ready" ...
	E1108 23:44:41.341045  213888 start.go:891] failed to get current CoreDNS ConfigMap: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1108 23:44:41.341073  213888 start.go:294] Unable to inject {"host.minikube.internal": 192.168.39.1} record into CoreDNS: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	W1108 23:44:41.341104  213888 out.go:239] Failed to inject host.minikube.internal into CoreDNS, this will limit the pods access to the host IP
	I1108 23:44:41.341639  213888 node_ready.go:53] error getting node "functional-400359": Get "https://192.168.39.189:8441/api/v1/nodes/functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	I1108 23:44:41.341651  213888 node_ready.go:38] duration metric: took 637.211µs waiting for node "functional-400359" to be "Ready" ...
	I1108 23:44:41.344408  213888 out.go:177] 
	W1108 23:44:41.345988  213888 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: error getting node "functional-400359": Get "https://192.168.39.189:8441/api/v1/nodes/functional-400359": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:41.346006  213888 out.go:239] * 
	W1108 23:44:41.346885  213888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 23:44:41.349263  213888 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1787086a19180       5374347291230       1 second ago         Running             kube-apiserver            0                   7d9d51206fb22       kube-apiserver-functional-400359
	824ed4a510711       6e38f40d628db       12 seconds ago       Exited              storage-provisioner       3                   9c7477be15957       storage-provisioner
	7921f51c4026f       10baa1ca17068       56 seconds ago       Running             kube-controller-manager   2                   ca712d9c0441a       kube-controller-manager-functional-400359
	bff1a67a2e4bc       5374347291230       58 seconds ago       Created             kube-apiserver            1                   523d23a3366a5       kube-apiserver-functional-400359
	88c140ed6030d       ead0a4a53df89       About a minute ago   Running             coredns                   1                   8005a17990fd0       coredns-5dd5756b68-tqvtr
	fb3df666c8263       bfc896cf80fba       About a minute ago   Running             kube-proxy                1                   0d0883976452b       kube-proxy-wv6f7
	1d784d6322fa7       73deb9a3f7025       About a minute ago   Running             etcd                      1                   1274367410852       etcd-functional-400359
	2faf0584a90c9       10baa1ca17068       About a minute ago   Exited              kube-controller-manager   1                   ca712d9c0441a       kube-controller-manager-functional-400359
	a06cdad021ec7       6d1b4fd1b182d       About a minute ago   Running             kube-scheduler            1                   9bb1405590c60       kube-scheduler-functional-400359
	e502430453488       ead0a4a53df89       About a minute ago   Exited              coredns                   0                   8005a17990fd0       coredns-5dd5756b68-tqvtr
	998ca340aa83f       bfc896cf80fba       About a minute ago   Exited              kube-proxy                0                   0d0883976452b       kube-proxy-wv6f7
	daf40bd6e2a8e       6d1b4fd1b182d       About a minute ago   Exited              kube-scheduler            0                   9bb1405590c60       kube-scheduler-functional-400359
	46b02dbdf3f22       73deb9a3f7025       About a minute ago   Exited              etcd                      0                   1274367410852       etcd-functional-400359
	
	* 
	* ==> containerd <==
	* -- Journal begins at Wed 2023-11-08 23:42:35 UTC, ends at Wed 2023-11-08 23:45:18 UTC. --
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.122784712Z" level=info msg="shim disconnected" id=dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.122833279Z" level=warning msg="cleaning up after shim disconnected" id=dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.122842119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.153972486Z" level=info msg="StopContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.156265546Z" level=info msg="StopPodSandbox for \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.156411974Z" level=info msg="Container to stop \"bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0\" must be in running or unknown state, current state \"CONTAINER_CREATED\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.156598790Z" level=info msg="Container to stop \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.208231957Z" level=info msg="shim disconnected" id=523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.208338377Z" level=warning msg="cleaning up after shim disconnected" id=523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0 namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.208351190Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.230602871Z" level=info msg="TearDown network for sandbox \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\" successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.230747079Z" level=info msg="StopPodSandbox for \"523d23a3366a5fc557a4272cae3560dee285f6cb9f2b24ee50f9723ce8880bc0\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.373793669Z" level=info msg="RemoveContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\""
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.381845817Z" level=info msg="RemoveContainer for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" returns successfully"
	Nov 08 23:45:11 functional-400359 containerd[2683]: time="2023-11-08T23:45:11.382697295Z" level=error msg="ContainerStatus for \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc58c905bfcc311a8499a0829bd9e11d64c680a5497cf0d7f449d1648572b32b\": not found"
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.081173534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-functional-400359,Uid:a075def9e32e694bce9f109a5666a324,Namespace:kube-system,Attempt:0,}"
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.139421216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.139950373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.140019518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.140188300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.617171843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-functional-400359,Uid:a075def9e32e694bce9f109a5666a324,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d9d51206fb22764283b3b6ff089269e466321b6246094f3810c55c50c4f0f08\""
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.626047387Z" level=info msg="CreateContainer within sandbox \"7d9d51206fb22764283b3b6ff089269e466321b6246094f3810c55c50c4f0f08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.681836981Z" level=info msg="CreateContainer within sandbox \"7d9d51206fb22764283b3b6ff089269e466321b6246094f3810c55c50c4f0f08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1787086a1918079953995cda98a8a3f069c2a2aaf5f1e187d78563422030fa96\""
	Nov 08 23:45:16 functional-400359 containerd[2683]: time="2023-11-08T23:45:16.682794269Z" level=info msg="StartContainer for \"1787086a1918079953995cda98a8a3f069c2a2aaf5f1e187d78563422030fa96\""
	Nov 08 23:45:17 functional-400359 containerd[2683]: time="2023-11-08T23:45:17.433064838Z" level=info msg="StartContainer for \"1787086a1918079953995cda98a8a3f069c2a2aaf5f1e187d78563422030fa96\" returns successfully"
	
	* 
	* ==> coredns [88c140ed6030d22284aaafb49382d15ef7da52d8beb9e058c36ea698c2910d04] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57342 - 44358 "HINFO IN 4361793349757605016.248109365602167116. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.135909373s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: unknown (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: unknown (get namespaces)
	
	* 
	* ==> coredns [e5024304534883a602aa8765639ff209648b3e4ce981260dfb50cd5186826dc5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51534 - 35900 "HINFO IN 2585345581505525764.4555830120890176857. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031001187s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-400359
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-400359
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e21c718ea4d79be9ab6c82476dffc8ce4079c94e
	                    minikube.k8s.io/name=functional-400359
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_08T23_43_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 08 Nov 2023 23:43:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-400359
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 08 Nov 2023 23:45:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 08 Nov 2023 23:44:24 +0000   Wed, 08 Nov 2023 23:43:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 08 Nov 2023 23:44:24 +0000   Wed, 08 Nov 2023 23:43:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 08 Nov 2023 23:44:24 +0000   Wed, 08 Nov 2023 23:43:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 08 Nov 2023 23:44:24 +0000   Wed, 08 Nov 2023 23:43:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.189
	  Hostname:    functional-400359
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa12a6704bf34e1d83876a2eb3b11647
	  System UUID:                fa12a670-4bf3-4e1d-8387-6a2eb3b11647
	  Boot ID:                    c3964329-2948-4c91-b6ae-11ab6cdcadb1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.8
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-tqvtr                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     101s
	  kube-system                 etcd-functional-400359                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         113s
	  kube-system                 kube-apiserver-functional-400359             250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-functional-400359    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-wv6f7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-functional-400359             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  Starting                 67s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node functional-400359 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node functional-400359 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node functional-400359 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                113s                 kubelet          Node functional-400359 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node functional-400359 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node functional-400359 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node functional-400359 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           101s                 node-controller  Node functional-400359 event: Registered Node functional-400359 in Controller
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node functional-400359 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node functional-400359 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x7 over 59s)    kubelet          Node functional-400359 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           43s                  node-controller  Node functional-400359 event: Registered Node functional-400359 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.156846] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.062315] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.304325] systemd-fstab-generator[561]: Ignoring "noauto" for root device
	[  +0.112180] systemd-fstab-generator[572]: Ignoring "noauto" for root device
	[  +0.151842] systemd-fstab-generator[585]: Ignoring "noauto" for root device
	[  +0.124353] systemd-fstab-generator[596]: Ignoring "noauto" for root device
	[  +0.268439] systemd-fstab-generator[623]: Ignoring "noauto" for root device
	[  +6.156386] systemd-fstab-generator[683]: Ignoring "noauto" for root device
	[Nov 8 23:43] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +9.282190] systemd-fstab-generator[1362]: Ignoring "noauto" for root device
	[ +18.264010] systemd-fstab-generator[2015]: Ignoring "noauto" for root device
	[  +0.177052] systemd-fstab-generator[2026]: Ignoring "noauto" for root device
	[  +0.171180] systemd-fstab-generator[2039]: Ignoring "noauto" for root device
	[  +0.169893] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
	[  +0.296549] systemd-fstab-generator[2076]: Ignoring "noauto" for root device
	[Nov 8 23:44] systemd-fstab-generator[2615]: Ignoring "noauto" for root device
	[  +0.147087] systemd-fstab-generator[2626]: Ignoring "noauto" for root device
	[  +0.171247] systemd-fstab-generator[2639]: Ignoring "noauto" for root device
	[  +0.165487] systemd-fstab-generator[2650]: Ignoring "noauto" for root device
	[  +0.295897] systemd-fstab-generator[2676]: Ignoring "noauto" for root device
	[ +19.128891] systemd-fstab-generator[3485]: Ignoring "noauto" for root device
	[ +15.032820] kauditd_printk_skb: 23 callbacks suppressed
	
	* 
	* ==> etcd [1d784d6322fa72bf1ea8c9873171f75a644fcdac3d60a60b7253cea2aad58484] <==
	* {"level":"info","ts":"2023-11-08T23:44:10.907861Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-08T23:44:10.907973Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-11-08T23:44:10.908286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a switched to configuration voters=(8048648980531676538)"}
	{"level":"info","ts":"2023-11-08T23:44:10.908344Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","added-peer-id":"6fb28b9aae66857a","added-peer-peer-urls":["https://192.168.39.189:2380"]}
	{"level":"info","ts":"2023-11-08T23:44:10.908546Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:44:10.908577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:44:10.919242Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:10.919299Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:10.919177Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-08T23:44:10.920701Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-08T23:44:10.920863Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6fb28b9aae66857a","initial-advertise-peer-urls":["https://192.168.39.189:2380"],"listen-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.189:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-08T23:44:12.571328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgPreVoteResp from 6fb28b9aae66857a at term 2"}
	{"level":"info","ts":"2023-11-08T23:44:12.571611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became candidate at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.571747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a received MsgVoteResp from 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.571885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6fb28b9aae66857a became leader at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.572003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6fb28b9aae66857a elected leader 6fb28b9aae66857a at term 3"}
	{"level":"info","ts":"2023-11-08T23:44:12.574123Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6fb28b9aae66857a","local-member-attributes":"{Name:functional-400359 ClientURLs:[https://192.168.39.189:2379]}","request-path":"/0/members/6fb28b9aae66857a/attributes","cluster-id":"f0bdb053fd9e03ec","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-08T23:44:12.574193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:44:12.575568Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T23:44:12.575581Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T23:44:12.57599Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T23:44:12.576127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:44:12.580777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.189:2379"}
	
	* 
	* ==> etcd [46b02dbdf3f22443678938ae41e97fbef5ff615bf6492aa752d605eaf59e9573] <==
	* {"level":"info","ts":"2023-11-08T23:43:21.2639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:43:21.265203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.189:2379"}
	{"level":"info","ts":"2023-11-08T23:43:21.264037Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.264104Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-08T23:43:21.268038Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-08T23:43:21.273657Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-08T23:43:21.27674Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-08T23:43:21.306896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f0bdb053fd9e03ec","local-member-id":"6fb28b9aae66857a","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.332049Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:21.332311Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-08T23:43:43.658034Z","caller":"traceutil/trace.go:171","msg":"trace[1655151050] linearizableReadLoop","detail":"{readStateIndex:436; appliedIndex:435; }","duration":"158.288056ms","start":"2023-11-08T23:43:43.499691Z","end":"2023-11-08T23:43:43.657979Z","steps":["trace[1655151050] 'read index received'  (duration: 158.050466ms)","trace[1655151050] 'applied index is now lower than readState.Index'  (duration: 237.256µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-08T23:43:43.658216Z","caller":"traceutil/trace.go:171","msg":"trace[1004018470] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"165.867105ms","start":"2023-11-08T23:43:43.492343Z","end":"2023-11-08T23:43:43.65821Z","steps":["trace[1004018470] 'process raft request'  (duration: 165.460392ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-08T23:43:43.659133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.382515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2023-11-08T23:43:43.659215Z","caller":"traceutil/trace.go:171","msg":"trace[1204654578] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:419; }","duration":"159.531169ms","start":"2023-11-08T23:43:43.499663Z","end":"2023-11-08T23:43:43.659194Z","steps":["trace[1204654578] 'agreement among raft nodes before linearized reading'  (duration: 158.722284ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T23:43:49.836017Z","caller":"traceutil/trace.go:171","msg":"trace[1640228342] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"142.995238ms","start":"2023-11-08T23:43:49.693Z","end":"2023-11-08T23:43:49.835995Z","steps":["trace[1640228342] 'process raft request'  (duration: 142.737466ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-08T23:44:09.257705Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-08T23:44:09.257894Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-400359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	{"level":"warn","ts":"2023-11-08T23:44:09.258128Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.258264Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.273807Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-08T23:44:09.274055Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.189:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-08T23:44:09.274266Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6fb28b9aae66857a","current-leader-member-id":"6fb28b9aae66857a"}
	{"level":"info","ts":"2023-11-08T23:44:09.277371Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:09.277689Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.189:2380"}
	{"level":"info","ts":"2023-11-08T23:44:09.277704Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-400359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.189:2380"],"advertise-client-urls":["https://192.168.39.189:2379"]}
	
	* 
	* ==> kernel <==
	*  23:45:20 up 2 min,  0 users,  load average: 1.63, 0.82, 0.32
	Linux functional-400359 5.10.57 #1 SMP Tue Nov 7 06:51:54 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1787086a1918079953995cda98a8a3f069c2a2aaf5f1e187d78563422030fa96] <==
	* I1108 23:45:19.715872       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1108 23:45:19.728666       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1108 23:45:19.728682       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1108 23:45:19.991427       1 controller.go:134] Starting OpenAPI controller
	I1108 23:45:19.991550       1 controller.go:85] Starting OpenAPI V3 controller
	I1108 23:45:19.991571       1 naming_controller.go:291] Starting NamingConditionController
	I1108 23:45:19.991585       1 establishing_controller.go:76] Starting EstablishingController
	I1108 23:45:19.991599       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1108 23:45:19.991610       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1108 23:45:19.991620       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1108 23:45:19.991965       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 23:45:19.992118       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 23:45:20.056846       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 23:45:20.109545       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1108 23:45:20.114588       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 23:45:20.114677       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 23:45:20.115322       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 23:45:20.115335       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 23:45:20.115789       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 23:45:20.115916       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 23:45:20.130696       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 23:45:20.130728       1 aggregator.go:166] initial CRD sync complete...
	I1108 23:45:20.130733       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 23:45:20.130738       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 23:45:20.130743       1 cache.go:39] Caches are synced for autoregister controller
	
	* 
	* ==> kube-apiserver [bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0] <==
	* 
	* ==> kube-controller-manager [2faf0584a90c98fa3ae503339949f6fdc901e881c318c3b0b4ca3323123ba1a0] <==
	* I1108 23:44:10.838065       1 serving.go:348] Generated self-signed cert in-memory
	I1108 23:44:11.452649       1 controllermanager.go:189] "Starting" version="v1.28.3"
	I1108 23:44:11.452696       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 23:44:11.454751       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1108 23:44:11.455029       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 23:44:11.455309       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1108 23:44:11.455704       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1108 23:44:11.475414       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I1108 23:44:11.576258       1 shared_informer.go:318] Caches are synced for tokens
	I1108 23:44:12.801347       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1108 23:44:12.802296       1 cleaner.go:83] "Starting CSR cleaner controller"
	I1108 23:44:12.899559       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I1108 23:44:12.899798       1 namespace_controller.go:197] "Starting namespace controller"
	I1108 23:44:12.900091       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I1108 23:44:12.926665       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I1108 23:44:12.927319       1 stateful_set.go:161] "Starting stateful set controller"
	I1108 23:44:12.927524       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I1108 23:44:12.935324       1 controllermanager.go:642] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1108 23:44:12.935710       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I1108 23:44:12.936165       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	F1108 23:44:12.956649       1 client_builder_dynamic.go:174] Get "https://192.168.39.189:8441/api/v1/namespaces/kube-system/serviceaccounts/resourcequota-controller": dial tcp 192.168.39.189:8441: connect: connection refused
	
	* 
	* ==> kube-controller-manager [7921f51c4026fd4eadeac9dbccfa803fc415bc1ed99e900bd95f598a614d8315] <==
	* E1108 23:45:19.933144       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E1108 23:45:19.933165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Secret: unknown (get secrets)
	E1108 23:45:19.933183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodTemplate: unknown (get podtemplates)
	E1108 23:45:19.933207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PriorityClass: unknown (get priorityclasses.scheduling.k8s.io)
	E1108 23:45:19.933225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.NetworkPolicy: unknown (get networkpolicies.networking.k8s.io)
	E1108 23:45:19.933239       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.LimitRange: unknown (get limitranges)
	E1108 23:45:19.933254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps)
	E1108 23:45:19.933311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io)
	E1108 23:45:19.933333       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: unknown
	E1108 23:45:19.933351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E1108 23:45:19.933368       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: unknown (get runtimeclasses.node.k8s.io)
	E1108 23:45:19.933385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services)
	E1108 23:45:19.933400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ClusterRoleBinding: unknown (get clusterrolebindings.rbac.authorization.k8s.io)
	E1108 23:45:19.933415       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.IngressClass: unknown (get ingressclasses.networking.k8s.io)
	E1108 23:45:19.933429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Deployment: unknown (get deployments.apps)
	E1108 23:45:19.933518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E1108 23:45:19.933556       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E1108 23:45:19.933569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	E1108 23:45:19.933579       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ResourceQuota: unknown (get resourcequotas)
	E1108 23:45:19.933589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces)
	E1108 23:45:19.933599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E1108 23:45:19.946914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E1108 23:45:19.947036       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CronJob: unknown (get cronjobs.batch)
	E1108 23:45:19.947097       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.VolumeAttachment: unknown (get volumeattachments.storage.k8s.io)
	E1108 23:45:20.031830       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CertificateSigningRequest: unknown (get certificatesigningrequests.certificates.k8s.io)
	
	* 
	* ==> kube-proxy [998ca340aa83f2a4ba2b50d7b4bff253c7fe93c3cf9c0f6737620c9ee77a4ea1] <==
	* I1108 23:43:40.754980       1 server_others.go:69] "Using iptables proxy"
	I1108 23:43:40.769210       1 node.go:141] Successfully retrieved node IP: 192.168.39.189
	I1108 23:43:40.838060       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1108 23:43:40.838106       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1108 23:43:40.841931       1 server_others.go:152] "Using iptables Proxier"
	I1108 23:43:40.842026       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 23:43:40.842300       1 server.go:846] "Version info" version="v1.28.3"
	I1108 23:43:40.842337       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 23:43:40.843102       1 config.go:188] "Starting service config controller"
	I1108 23:43:40.843156       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 23:43:40.843175       1 config.go:97] "Starting endpoint slice config controller"
	I1108 23:43:40.843178       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 23:43:40.843838       1 config.go:315] "Starting node config controller"
	I1108 23:43:40.843878       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 23:43:40.943579       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 23:43:40.943667       1 shared_informer.go:318] Caches are synced for service config
	I1108 23:43:40.943937       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [fb3df666c8263c19fd9a028191dcb6e116547d67a9bf7f535ab103998f60679d] <==
	* I1108 23:44:13.012381       1 shared_informer.go:311] Waiting for caches to sync for node config
	W1108 23:44:13.012621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.012810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.013169       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.189:8441: connect: connection refused' (may retry after sleeping)
	W1108 23:44:13.815291       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.815363       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:13.950038       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:13.950102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:14.326340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:14.326643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:15.820268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:15.820340       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:16.787304       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:16.787347       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8441/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:17.093198       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:17.093270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:19.899967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:19.900010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-400359&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:20.381161       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	E1108 23:44:20.381245       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.189:8441: connect: connection refused
	W1108 23:44:24.387034       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	E1108 23:44:24.387290       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	I1108 23:44:29.107551       1 shared_informer.go:318] Caches are synced for service config
	I1108 23:44:29.513134       1 shared_informer.go:318] Caches are synced for node config
	I1108 23:44:35.808555       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a06cdad021ec7e1e28779a525beede6288ae5f847a64e005969e95c7cf80f00a] <==
	* I1108 23:44:12.864532       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:44:12.864566       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 23:44:12.864879       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 23:44:12.961705       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1108 23:44:12.965186       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 23:44:12.965350       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1108 23:44:24.314857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	E1108 23:44:24.314957       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	E1108 23:44:24.319832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E1108 23:44:24.320160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: unknown (get pods)
	E1108 23:44:24.320904       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)
	E1108 23:44:24.321298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: unknown (get services)
	E1108 23:44:24.321419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)
	E1108 23:44:24.322244       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E1108 23:44:24.322300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: unknown (get namespaces)
	E1108 23:44:24.322320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E1108 23:44:24.324606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E1108 23:44:24.328639       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes)
	E1108 23:44:24.328706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E1108 23:44:24.328951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E1108 23:44:24.401809       1 reflector.go:147] pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	E1108 23:45:19.867616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)
	E1108 23:45:19.867896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: unknown (get nodes)
	E1108 23:45:19.867929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E1108 23:45:19.897687       1 reflector.go:147] pkg/authentication/request/headerrequest/requestheader_controller.go:172: Failed to watch *v1.ConfigMap: unknown (get configmaps)
	
	* 
	* ==> kube-scheduler [daf40bd6e2a8ef19adeffd9a21c291c4492278b21c25346b8b1c6c151d6ce2a9] <==
	* E1108 23:43:23.555057       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:43:23.555310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1108 23:43:23.555637       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 23:43:24.357554       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1108 23:43:24.357652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 23:43:24.363070       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 23:43:24.363147       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 23:43:24.439814       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 23:43:24.439863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 23:43:24.511419       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 23:43:24.511725       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1108 23:43:24.521064       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1108 23:43:24.521357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1108 23:43:24.636054       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 23:43:24.636113       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1108 23:43:24.742651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1108 23:43:24.742701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1108 23:43:24.766583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 23:43:24.766665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 23:43:24.821852       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 23:43:24.821977       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1108 23:43:26.911793       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 23:44:09.072908       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1108 23:44:09.073170       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1108 23:44:09.073383       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-11-08 23:42:35 UTC, ends at Wed 2023-11-08 23:45:21 UTC. --
	Nov 08 23:45:13 functional-400359 kubelet[3491]: I1108 23:45:13.073105    3491 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="782fbbe1f7d627cd92711fb14a0b0813" path="/var/lib/kubelet/pods/782fbbe1f7d627cd92711fb14a0b0813/volumes"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.587992    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.588871    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.589147    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.589369    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.589681    3491 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"functional-400359\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-400359?timeout=10s\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:15 functional-400359 kubelet[3491]: E1108 23:45:15.589698    3491 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Nov 08 23:45:16 functional-400359 kubelet[3491]: I1108 23:45:16.071801    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:16 functional-400359 kubelet[3491]: I1108 23:45:16.072135    3491 status_manager.go:853] "Failed to get status for pod" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:16 functional-400359 kubelet[3491]: I1108 23:45:16.077939    3491 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-400359" podUID="9b2db385-150c-4599-b59e-165208edd076"
	Nov 08 23:45:16 functional-400359 kubelet[3491]: E1108 23:45:16.078880    3491 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-400359"
	Nov 08 23:45:17 functional-400359 kubelet[3491]: I1108 23:45:17.072141    3491 scope.go:117] "RemoveContainer" containerID="824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d"
	Nov 08 23:45:17 functional-400359 kubelet[3491]: I1108 23:45:17.074271    3491 status_manager.go:853] "Failed to get status for pod" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:17 functional-400359 kubelet[3491]: I1108 23:45:17.074836    3491 status_manager.go:853] "Failed to get status for pod" podUID="926dd51d8b9a510a42b3d2d730469c12" pod="kube-system/kube-controller-manager-functional-400359" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-400359\": dial tcp 192.168.39.189:8441: connect: connection refused"
	Nov 08 23:45:17 functional-400359 kubelet[3491]: E1108 23:45:17.079703    3491 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(01aed977-1439-433c-b8b1-869c92fcd9e2)\"" pod="kube-system/storage-provisioner" podUID="01aed977-1439-433c-b8b1-869c92fcd9e2"
	Nov 08 23:45:18 functional-400359 kubelet[3491]: I1108 23:45:18.402550    3491 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-400359" podUID="9b2db385-150c-4599-b59e-165208edd076"
	Nov 08 23:45:20 functional-400359 kubelet[3491]: E1108 23:45:20.009031    3491 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Nov 08 23:45:20 functional-400359 kubelet[3491]: E1108 23:45:20.009075    3491 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Nov 08 23:45:20 functional-400359 kubelet[3491]: I1108 23:45:20.162797    3491 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-400359"
	Nov 08 23:45:20 functional-400359 kubelet[3491]: I1108 23:45:20.407313    3491 kubelet.go:1872] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-400359" podUID="9b2db385-150c-4599-b59e-165208edd076"
	Nov 08 23:45:21 functional-400359 kubelet[3491]: E1108 23:45:21.103916    3491 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 08 23:45:21 functional-400359 kubelet[3491]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 08 23:45:21 functional-400359 kubelet[3491]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 08 23:45:21 functional-400359 kubelet[3491]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 08 23:45:21 functional-400359 kubelet[3491]: I1108 23:45:21.106137    3491 scope.go:117] "RemoveContainer" containerID="bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0"
	
	* 
	* ==> storage-provisioner [824ed4a51071156e47d1202f5d0c470369342d44f391048bf2efb68837cdac0d] <==
	* I1108 23:45:05.218771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 23:45:05.220296       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1108 23:45:20.877055  214586 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-08T23:45:20Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-11-08T23:45:20Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-functional-400359_782fbbe1f7d627cd92711fb14a0b0813/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: kube-apiserver [bff1a67a2e4bc7b9758c4313883821568fe6cdd5f73960c615f53ff30f3487c0]

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-400359 -n functional-400359
helpers_test.go:261: (dbg) Run:  kubectl --context functional-400359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-apiserver-functional-400359
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/NodeLabels]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-400359 describe pod kube-apiserver-functional-400359
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-400359 describe pod kube-apiserver-functional-400359: exit status 1 (96.198125ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "kube-apiserver-functional-400359" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-400359 describe pod kube-apiserver-functional-400359: exit status 1
--- FAIL: TestFunctional/parallel/NodeLabels (6.83s)
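
Editor's note: the post-mortem above shows the underlying cause — the kube-apiserver mirror pod was being deleted and recreated, so every API call during the window hit "connection refused". A minimal readiness-poll sketch in Go, assuming only that kubectl is on PATH and that the functional-400359 context exists; the helper name and timeout are illustrative and not part of the test suite:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServer polls `kubectl get --raw /readyz` until the apiserver
// answers "ok" or the deadline passes. Illustrative only; the real
// functional tests use their own retry helpers.
func waitForAPIServer(kubeContext string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "--raw", "/readyz").CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) == "ok" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver for context %q not ready after %s", kubeContext, timeout)
}

func main() {
	if err := waitForAPIServer("functional-400359", 2*time.Minute); err != nil {
		fmt.Println("node-label checks would hit the same post-mortem:", err)
		return
	}
	fmt.Println("apiserver ready; safe to assert node labels")
}

Against the state captured above, the poll keeps failing until the apiserver container returns; against a healthy profile it returns almost immediately.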

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-400359 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1438: (dbg) Non-zero exit: kubectl --context functional-400359 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: exit status 1 (61.467723ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.39.189:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.39.189:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1442: failed to create hello-node deployment with this command "kubectl --context functional-400359 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)
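
Editor's note: the failure here is a plain TCP-level refusal on 192.168.39.189:8441, so it can be detected before kubectl is even invoked. A hedged sketch that probes the endpoint taken from the output above; the helper itself is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

// apiServerReachable reports whether a TCP connection to the apiserver
// endpoint can be opened within the timeout. It mirrors the failure mode
// in the log (dial tcp ...:8441: connect: connection refused) but is not
// part of the minikube test suite.
func apiServerReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !apiServerReachable("192.168.39.189:8441", 3*time.Second) {
		fmt.Println("apiserver not accepting connections; `kubectl create deployment` would fail as above")
		return
	}
	fmt.Println("apiserver reachable; deployment creation can proceed")
}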

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 service list
functional_test.go:1458: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 service list: exit status 119 (376.375698ms)

                                                
                                                
-- stdout --
	* This control plane is not running! (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-400359"

                                                
                                                
-- /stdout --
** stderr ** 
	! This is unusual - you may want to investigate using "minikube logs -p functional-400359"

                                                
                                                
** /stderr **
functional_test.go:1460: failed to do service list. args "out/minikube-linux-amd64 -p functional-400359 service list" : exit status 119
functional_test.go:1463: expected 'service list' to contain *hello-node* but got -"* This control plane is not running! (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-400359\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.38s)
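
Editor's note: the post-mortem helpers earlier in this report already run `out/minikube-linux-amd64 status --format={{.APIServer}}`; the same query could gate the service assertions. A sketch under that assumption — the binary path and profile come from the log, the wrapper is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerStatus runs the same query the post-mortem helpers use
// (`status --format={{.APIServer}}`) and returns the reported state,
// e.g. "Running" or "Stopped". A sketch, not the suite's helper.
func apiServerStatus(minikubeBin, profile string) (string, error) {
	out, err := exec.Command(minikubeBin, "status", "--format={{.APIServer}}", "-p", profile).Output()
	// minikube status exits non-zero when components are stopped, so keep
	// whatever it printed even if err is non-nil.
	state := strings.TrimSpace(string(out))
	if state == "" && err != nil {
		return "", err
	}
	return state, nil
}

func main() {
	state, err := apiServerStatus("out/minikube-linux-amd64", "functional-400359")
	if err != nil {
		fmt.Println("could not query status:", err)
		return
	}
	if state != "Running" {
		fmt.Printf("apiserver is %q; `service list` cannot show hello-node\n", state)
		return
	}
	fmt.Println("apiserver running; service list output is meaningful")
}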

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (4.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 service list -o json: (4.301663608s)
functional_test.go:1493: Took "4.301789286s" to run "out/minikube-linux-amd64 -p functional-400359 service list -o json"
functional_test.go:1497: expected the json of 'service list' to include "hello-node" but got *"[{\"Namespace\":\"default\",\"Name\":\"kubernetes\",\"URLs\":[],\"PortNames\":[\"No node port\"]},{\"Namespace\":\"kube-system\",\"Name\":\"kube-dns\",\"URLs\":[],\"PortNames\":[\"No node port\"]}]"*. args: "out/minikube-linux-amd64 -p functional-400359 service list -o json"
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (4.30s)
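
Editor's note: the JSON that was returned (two services, neither of them hello-node) has a small, stable shape, so the assertion reduces to decoding it and scanning names. A minimal sketch assuming exactly that shape; the struct and helper names are illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

// serviceEntry matches the objects printed by `minikube service list -o json`
// as captured in the log above.
type serviceEntry struct {
	Namespace string   `json:"Namespace"`
	Name      string   `json:"Name"`
	URLs      []string `json:"URLs"`
	PortNames []string `json:"PortNames"`
}

// containsService reports whether any entry in the JSON list has the given name.
func containsService(raw []byte, name string) (bool, error) {
	var entries []serviceEntry
	if err := json.Unmarshal(raw, &entries); err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// The exact output captured in the failing run.
	raw := []byte(`[{"Namespace":"default","Name":"kubernetes","URLs":[],"PortNames":["No node port"]},{"Namespace":"kube-system","Name":"kube-dns","URLs":[],"PortNames":["No node port"]}]`)
	found, err := containsService(raw, "hello-node")
	fmt.Println(found, err) // false <nil>: hello-node was never deployed
}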

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 service --namespace=default --https --url hello-node: exit status 115 (367.320956ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

                                                
                                                
** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-linux-amd64 -p functional-400359 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 service hello-node --url --format={{.IP}}: exit status 115 (489.234123ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

                                                
                                                
** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-400359 service hello-node --url --format={{.IP}}": exit status 115
functional_test.go:1547: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.49s)
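
Editor's note: the final check (functional_test.go:1547) is just IP validation of the `--format={{.IP}}` output, and the empty string fails it. A tiny illustrative sketch of that validation:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The failing run produced an empty string for {{.IP}}; a healthy run
	// would print the node IP seen elsewhere in this report.
	for _, candidate := range []string{"", "192.168.39.189"} {
		if net.ParseIP(candidate) == nil {
			fmt.Printf("%q is not a valid IP\n", candidate)
			continue
		}
		fmt.Printf("%q is a valid IP\n", candidate)
	}
}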

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 service hello-node --url: exit status 115 (338.694055ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

                                                
                                                
** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-linux-amd64 -p functional-400359 service hello-node --url": exit status 115
functional_test.go:1564: found endpoint for hello-node: 
functional_test.go:1572: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.34s)
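
Editor's note: similarly, the scheme assertion at functional_test.go:1572 amounts to parsing the endpoint with net/url and comparing schemes; an empty endpoint yields an empty scheme, which is exactly what the failure reports. A minimal sketch, with the healthy-case URL invented purely for illustration:

package main

import (
	"fmt"
	"net/url"
)

// checkEndpointScheme parses a service endpoint and verifies its scheme,
// mirroring the shape of the failed assertion (expected "http", got "").
func checkEndpointScheme(endpoint, want string) error {
	u, err := url.Parse(endpoint)
	if err != nil {
		return err
	}
	if u.Scheme != want {
		return fmt.Errorf("expected scheme %q, got %q", want, u.Scheme)
	}
	return nil
}

func main() {
	fmt.Println(checkEndpointScheme("", "http"))                           // expected scheme "http", got ""
	fmt.Println(checkEndpointScheme("http://192.168.39.189:31000", "http")) // <nil> (hypothetical healthy endpoint)
}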

                                                
                                    

Test pass (257/306)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 9.43
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 6.53
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.16
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.6
20 TestOffline 153.02
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
25 TestAddons/Setup 163.98
27 TestAddons/parallel/Registry 14.75
28 TestAddons/parallel/Ingress 22.13
29 TestAddons/parallel/InspektorGadget 10.94
30 TestAddons/parallel/MetricsServer 6.13
31 TestAddons/parallel/HelmTiller 11.33
33 TestAddons/parallel/CSI 66.45
34 TestAddons/parallel/Headlamp 14.99
35 TestAddons/parallel/CloudSpanner 5.81
36 TestAddons/parallel/LocalPath 53.75
37 TestAddons/parallel/NvidiaDevicePlugin 6.04
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/StoppedEnableDisable 102.66
42 TestCertOptions 96.58
43 TestCertExpiration 341.48
45 TestForceSystemdFlag 107.35
46 TestForceSystemdEnv 70.39
48 TestKVMDriverInstallOrUpdate 2.79
53 TestErrorSpam/start 0.42
54 TestErrorSpam/status 0.85
55 TestErrorSpam/pause 1.66
56 TestErrorSpam/unpause 1.75
57 TestErrorSpam/stop 2.3
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 81.19
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 6.73
64 TestFunctional/serial/KubeContext 0.05
65 TestFunctional/serial/KubectlGetPods 0.09
68 TestFunctional/serial/CacheCmd/cache/add_remote 4.09
69 TestFunctional/serial/CacheCmd/cache/add_local 1.81
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
71 TestFunctional/serial/CacheCmd/cache/list 0.07
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.26
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
79 TestFunctional/serial/LogsCmd 1.43
83 TestFunctional/parallel/ConfigCmd 0.5
84 TestFunctional/parallel/DashboardCmd 16.55
85 TestFunctional/parallel/DryRun 0.36
86 TestFunctional/parallel/InternationalLanguage 0.22
87 TestFunctional/parallel/StatusCmd 1.09
91 TestFunctional/parallel/ServiceCmdConnect 40.18
92 TestFunctional/parallel/AddonsCmd 0.18
93 TestFunctional/parallel/PersistentVolumeClaim 111.57
95 TestFunctional/parallel/SSHCmd 0.53
96 TestFunctional/parallel/CpCmd 1.08
98 TestFunctional/parallel/FileSync 0.27
99 TestFunctional/parallel/CertSync 1.56
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
107 TestFunctional/parallel/License 0.2
108 TestFunctional/parallel/Version/short 0.07
109 TestFunctional/parallel/Version/components 0.66
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
114 TestFunctional/parallel/ImageCommands/ImageBuild 3.73
115 TestFunctional/parallel/ImageCommands/Setup 0.9
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.68
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.74
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
134 TestFunctional/parallel/ProfileCmd/profile_list 0.36
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.54
136 TestFunctional/parallel/MountCmd/any-port 22.6
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.13
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.12
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.49
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.16
145 TestFunctional/parallel/MountCmd/specific-port 1.9
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.47
147 TestFunctional/delete_addon-resizer_images 0.07
148 TestFunctional/delete_my-image_image 0.02
149 TestFunctional/delete_minikube_cached_images 0.02
153 TestIngressAddonLegacy/StartLegacyK8sCluster 84.73
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.14
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.65
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 41.51
160 TestJSONOutput/start/Command 81.12
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 0.71
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 0.66
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 7.12
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 0.24
188 TestMainNoArgs 0.07
189 TestMinikubeProfile 132.19
192 TestMountStart/serial/StartWithMountFirst 29.52
193 TestMountStart/serial/VerifyMountFirst 0.42
194 TestMountStart/serial/StartWithMountSecond 29.05
195 TestMountStart/serial/VerifyMountSecond 0.44
196 TestMountStart/serial/DeleteFirst 1.2
197 TestMountStart/serial/VerifyMountPostDelete 0.51
198 TestMountStart/serial/Stop 1.24
199 TestMountStart/serial/RestartStopped 24.76
200 TestMountStart/serial/VerifyMountPostStop 0.42
203 TestMultiNode/serial/FreshStart2Nodes 127.64
204 TestMultiNode/serial/DeployApp2Nodes 3.84
205 TestMultiNode/serial/PingHostFrom2Pods 0.97
206 TestMultiNode/serial/AddNode 43.07
207 TestMultiNode/serial/ProfileList 0.24
208 TestMultiNode/serial/CopyFile 8.33
209 TestMultiNode/serial/StopNode 12.4
210 TestMultiNode/serial/StartAfterStop 28.2
211 TestMultiNode/serial/RestartKeepsNodes 332.52
212 TestMultiNode/serial/DeleteNode 1.98
213 TestMultiNode/serial/StopMultiNode 183.54
214 TestMultiNode/serial/RestartMultiNode 94.23
215 TestMultiNode/serial/ValidateNameConflict 70.19
220 TestPreload 245.17
222 TestScheduledStopUnix 139.96
226 TestRunningBinaryUpgrade 183.85
228 TestKubernetesUpgrade 232.26
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
235 TestNoKubernetes/serial/StartWithK8s 123.79
240 TestNetworkPlugins/group/false 3.73
244 TestNoKubernetes/serial/StartWithStopK8s 25.83
245 TestNoKubernetes/serial/Start 31.27
246 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
247 TestNoKubernetes/serial/ProfileList 0.82
248 TestNoKubernetes/serial/Stop 2.12
249 TestNoKubernetes/serial/StartNoArgs 42.08
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
251 TestStoppedBinaryUpgrade/Setup 0.48
252 TestStoppedBinaryUpgrade/Upgrade 166.67
261 TestPause/serial/Start 137.47
262 TestNetworkPlugins/group/auto/Start 126.61
263 TestStoppedBinaryUpgrade/MinikubeLogs 1.51
264 TestNetworkPlugins/group/kindnet/Start 103.09
265 TestNetworkPlugins/group/calico/Start 115.36
266 TestNetworkPlugins/group/auto/KubeletFlags 0.29
267 TestNetworkPlugins/group/auto/NetCatPod 13.53
268 TestPause/serial/SecondStartNoReconfiguration 7.86
269 TestPause/serial/Pause 0.85
270 TestPause/serial/VerifyStatus 0.33
271 TestNetworkPlugins/group/auto/DNS 26.23
272 TestPause/serial/Unpause 0.76
273 TestPause/serial/PauseAgain 1.47
274 TestPause/serial/DeletePaused 1.19
275 TestPause/serial/VerifyDeletedResources 0.55
276 TestNetworkPlugins/group/custom-flannel/Start 107.76
277 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
278 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
279 TestNetworkPlugins/group/kindnet/NetCatPod 10.33
280 TestNetworkPlugins/group/auto/Localhost 0.18
281 TestNetworkPlugins/group/auto/HairPin 0.17
282 TestNetworkPlugins/group/kindnet/DNS 0.22
283 TestNetworkPlugins/group/kindnet/Localhost 0.41
284 TestNetworkPlugins/group/kindnet/HairPin 0.38
285 TestNetworkPlugins/group/enable-default-cni/Start 97.8
286 TestNetworkPlugins/group/flannel/Start 134.21
287 TestNetworkPlugins/group/calico/ControllerPod 5.03
288 TestNetworkPlugins/group/calico/KubeletFlags 0.27
289 TestNetworkPlugins/group/calico/NetCatPod 13.46
290 TestNetworkPlugins/group/calico/DNS 0.26
291 TestNetworkPlugins/group/calico/Localhost 0.23
292 TestNetworkPlugins/group/calico/HairPin 0.24
293 TestNetworkPlugins/group/bridge/Start 88.5
294 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
295 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.44
296 TestNetworkPlugins/group/custom-flannel/DNS 0.25
297 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
298 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
299 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
300 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.63
302 TestStartStop/group/old-k8s-version/serial/FirstStart 136.54
303 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
304 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
305 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
307 TestStartStop/group/no-preload/serial/FirstStart 90.88
308 TestNetworkPlugins/group/flannel/ControllerPod 5.04
309 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
310 TestNetworkPlugins/group/flannel/NetCatPod 10.51
311 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
312 TestNetworkPlugins/group/bridge/NetCatPod 10.38
313 TestNetworkPlugins/group/flannel/DNS 0.23
314 TestNetworkPlugins/group/flannel/Localhost 0.17
315 TestNetworkPlugins/group/flannel/HairPin 0.16
316 TestNetworkPlugins/group/bridge/DNS 21.2
318 TestStartStop/group/embed-certs/serial/FirstStart 129.64
319 TestNetworkPlugins/group/bridge/Localhost 0.16
320 TestNetworkPlugins/group/bridge/HairPin 0.18
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.64
323 TestStartStop/group/no-preload/serial/DeployApp 9.69
324 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.29
325 TestStartStop/group/no-preload/serial/Stop 92.9
326 TestStartStop/group/old-k8s-version/serial/DeployApp 7.85
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.24
328 TestStartStop/group/old-k8s-version/serial/Stop 93
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.56
330 TestStartStop/group/embed-certs/serial/DeployApp 8.43
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.88
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.26
334 TestStartStop/group/embed-certs/serial/Stop 92.4
335 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
336 TestStartStop/group/no-preload/serial/SecondStart 307.8
337 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
338 TestStartStop/group/old-k8s-version/serial/SecondStart 348.31
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 311.94
341 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
342 TestStartStop/group/embed-certs/serial/SecondStart 344.2
343 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
344 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
345 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
346 TestStartStop/group/no-preload/serial/Pause 2.93
348 TestStartStop/group/newest-cni/serial/FirstStart 83.19
349 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
350 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
351 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
352 TestStartStop/group/old-k8s-version/serial/Pause 3.02
353 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.1
357 TestStartStop/group/newest-cni/serial/DeployApp 0
358 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.46
359 TestStartStop/group/newest-cni/serial/Stop 7.13
360 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
361 TestStartStop/group/newest-cni/serial/SecondStart 49.99
362 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
363 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
364 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
365 TestStartStop/group/embed-certs/serial/Pause 2.79
366 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
367 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
369 TestStartStop/group/newest-cni/serial/Pause 2.66
x
+
TestDownloadOnly/v1.16.0/json-events (9.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-081259 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-081259 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (9.430566874s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.43s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-081259
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-081259: exit status 85 (83.580741ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-081259 | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |          |
	|         | -p download-only-081259        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:35:03
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:35:03.240947  208975 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:35:03.241141  208975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:03.241155  208975 out.go:309] Setting ErrFile to fd 2...
	I1108 23:35:03.241162  208975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:03.241386  208975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	W1108 23:35:03.241560  208975 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17586-201782/.minikube/config/config.json: open /home/jenkins/minikube-integration/17586-201782/.minikube/config/config.json: no such file or directory
	I1108 23:35:03.242358  208975 out.go:303] Setting JSON to true
	I1108 23:35:03.243275  208975 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":22657,"bootTime":1699463846,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 23:35:03.243345  208975 start.go:138] virtualization: kvm guest
	I1108 23:35:03.246454  208975 out.go:97] [download-only-081259] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W1108 23:35:03.246621  208975 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball: no such file or directory
	I1108 23:35:03.246787  208975 notify.go:220] Checking for updates...
	I1108 23:35:03.248317  208975 out.go:169] MINIKUBE_LOCATION=17586
	I1108 23:35:03.250173  208975 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:35:03.251908  208975 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:35:03.253626  208975 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	I1108 23:35:03.255243  208975 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1108 23:35:03.258183  208975 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 23:35:03.258541  208975 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:35:03.296155  208975 out.go:97] Using the kvm2 driver based on user configuration
	I1108 23:35:03.296198  208975 start.go:298] selected driver: kvm2
	I1108 23:35:03.296207  208975 start.go:902] validating driver "kvm2" against <nil>
	I1108 23:35:03.296598  208975 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:35:03.296761  208975 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17586-201782/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 23:35:03.314900  208975 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 23:35:03.314983  208975 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1108 23:35:03.315521  208975 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1108 23:35:03.315747  208975 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 23:35:03.315827  208975 cni.go:84] Creating CNI manager for ""
	I1108 23:35:03.315876  208975 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:35:03.315895  208975 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1108 23:35:03.315909  208975 start_flags.go:323] config:
	{Name:download-only-081259 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-081259 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:35:03.316196  208975 iso.go:125] acquiring lock: {Name:mk33479b76ec6919fe69628bcf9e99f9786f49af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:35:03.318631  208975 out.go:97] Downloading VM boot image ...
	I1108 23:35:03.318697  208975 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17586-201782/.minikube/cache/iso/amd64/minikube-v1.32.1-amd64.iso
	I1108 23:35:05.756198  208975 out.go:97] Starting control plane node download-only-081259 in cluster download-only-081259
	I1108 23:35:05.756222  208975 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1108 23:35:05.789837  208975 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1108 23:35:05.789888  208975 cache.go:56] Caching tarball of preloaded images
	I1108 23:35:05.790059  208975 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1108 23:35:05.792435  208975 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1108 23:35:05.792469  208975 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1108 23:35:05.824618  208975 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-081259"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/json-events (6.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-081259 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-081259 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.524944856s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (6.53s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-081259
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-081259: exit status 85 (83.389682ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-081259 | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |          |
	|         | -p download-only-081259        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-081259 | jenkins | v1.32.0 | 08 Nov 23 23:35 UTC |          |
	|         | -p download-only-081259        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/08 23:35:12
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 23:35:12.761619  209032 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:35:12.761933  209032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:12.761944  209032 out.go:309] Setting ErrFile to fd 2...
	I1108 23:35:12.761949  209032 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:35:12.762118  209032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	W1108 23:35:12.762238  209032 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17586-201782/.minikube/config/config.json: open /home/jenkins/minikube-integration/17586-201782/.minikube/config/config.json: no such file or directory
	I1108 23:35:12.762661  209032 out.go:303] Setting JSON to true
	I1108 23:35:12.763500  209032 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":22667,"bootTime":1699463846,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 23:35:12.763558  209032 start.go:138] virtualization: kvm guest
	I1108 23:35:12.766055  209032 out.go:97] [download-only-081259] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 23:35:12.767738  209032 out.go:169] MINIKUBE_LOCATION=17586
	I1108 23:35:12.766268  209032 notify.go:220] Checking for updates...
	I1108 23:35:12.770883  209032 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:35:12.772539  209032 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:35:12.774155  209032 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	I1108 23:35:12.775727  209032 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1108 23:35:12.778920  209032 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 23:35:12.779449  209032 config.go:182] Loaded profile config "download-only-081259": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1108 23:35:12.779522  209032 start.go:810] api.Load failed for download-only-081259: filestore "download-only-081259": Docker machine "download-only-081259" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1108 23:35:12.779623  209032 driver.go:378] Setting default libvirt URI to qemu:///system
	W1108 23:35:12.779658  209032 start.go:810] api.Load failed for download-only-081259: filestore "download-only-081259": Docker machine "download-only-081259" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1108 23:35:12.814437  209032 out.go:97] Using the kvm2 driver based on existing profile
	I1108 23:35:12.814490  209032 start.go:298] selected driver: kvm2
	I1108 23:35:12.814498  209032 start.go:902] validating driver "kvm2" against &{Name:download-only-081259 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-081259 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:35:12.815055  209032 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:35:12.815146  209032 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17586-201782/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1108 23:35:12.831232  209032 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1108 23:35:12.832077  209032 cni.go:84] Creating CNI manager for ""
	I1108 23:35:12.832114  209032 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1108 23:35:12.832129  209032 start_flags.go:323] config:
	{Name:download-only-081259 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-081259 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:35:12.832334  209032 iso.go:125] acquiring lock: {Name:mk33479b76ec6919fe69628bcf9e99f9786f49af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 23:35:12.834498  209032 out.go:97] Starting control plane node download-only-081259 in cluster download-only-081259
	I1108 23:35:12.834528  209032 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:35:12.861533  209032 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4
	I1108 23:35:12.861576  209032 cache.go:56] Caching tarball of preloaded images
	I1108 23:35:12.861745  209032 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1108 23:35:12.864052  209032 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1108 23:35:12.864069  209032 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4 ...
	I1108 23:35:12.896242  209032 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:1f1245a53374a4d119b818e36f0d29e2 -> /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4
	I1108 23:35:17.448434  209032 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4 ...
	I1108 23:35:17.448538  209032 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17586-201782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-081259"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-081259
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-250626 --alsologtostderr --binary-mirror http://127.0.0.1:43385 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-250626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-250626
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
x
+
TestOffline (153.02s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-504034 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-504034 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m31.736352856s)
helpers_test.go:175: Cleaning up "offline-containerd-504034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-504034
E1109 00:18:52.926483  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-504034: (1.284632228s)
--- PASS: TestOffline (153.02s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-040821
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-040821: exit status 85 (74.391842ms)

                                                
                                                
-- stdout --
	* Profile "addons-040821" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040821"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-040821
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-040821: exit status 85 (75.715369ms)

                                                
                                                
-- stdout --
	* Profile "addons-040821" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-040821"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (163.98s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-040821 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-040821 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m43.978010546s)
--- PASS: TestAddons/Setup (163.98s)
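The Setup run above creates the addons-040821 profile with every addon under test enabled in a single start invocation. As a rough sketch of reproducing and inspecting that state by hand (flags trimmed to a subset of the ones in the run above; "addons list" is not exercised in this log but is the standard way to confirm the result):

    # Start a profile with several addons enabled at creation time.
    out/minikube-linux-amd64 start -p addons-040821 --driver=kvm2 --container-runtime=containerd \
      --memory=4000 --addons=registry --addons=metrics-server --addons=ingress --addons=csi-hostpath-driver
    # Confirm which addons ended up enabled for the profile.
    out/minikube-linux-amd64 addons list -p addons-040821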

                                                
                                    
TestAddons/parallel/Registry (14.75s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 21.568551ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-lzrgh" [e3c5f420-9dd5-4de2-a8fa-7e52767170d1] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.023847433s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vw587" [5da725f6-2d22-487a-8775-d468bf8394bb] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.027972112s
addons_test.go:339: (dbg) Run:  kubectl --context addons-040821 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-040821 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-040821 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.765553165s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.75s)
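The registry check above reduces to resolving the addon's in-cluster service and issuing one HTTP request from a throwaway pod. A minimal manual equivalent, assuming the same profile and the default registry service in kube-system (the pod name registry-probe is hypothetical; the wget mirrors the one in the run above):

    # Probe the registry service from a temporary busybox pod.
    kubectl --context addons-040821 run registry-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # The VM address used for the node-side check is reported by:
    out/minikube-linux-amd64 -p addons-040821 ip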

                                                
                                    
TestAddons/parallel/Ingress (22.13s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-040821 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-040821 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-040821 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b21e6f15-7f62-40b8-a73b-d90fb455c0aa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b21e6f15-7f62-40b8-a73b-d90fb455c0aa] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013772382s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-040821 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.182
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-040821 addons disable ingress-dns --alsologtostderr -v=1: (1.913824695s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-040821 addons disable ingress --alsologtostderr -v=1: (7.950463201s)
--- PASS: TestAddons/parallel/Ingress (22.13s)
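The ingress flow above is: wait for the controller, create an Ingress plus a backing nginx pod and service from the repo's testdata, curl through the controller with the expected Host header, and resolve a test hostname against the ingress-dns responder on the VM address. A condensed sketch of the two verification steps (hostnames come from the test manifests; the IP is taken from "minikube ip" rather than hard-coded):

    # HTTP check through the ingress controller, from inside the VM.
    out/minikube-linux-amd64 -p addons-040821 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # DNS check against the ingress-dns addon on the VM address.
    nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-040821 ip)"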

                                                
                                    
TestAddons/parallel/InspektorGadget (10.94s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-87bxg" [d52fba5f-222e-462a-9ab9-1bf2c7078339] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.014020336s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-040821
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-040821: (5.928744511s)
--- PASS: TestAddons/parallel/InspektorGadget (10.94s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.13s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 21.90401ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-f5xqz" [582f6f5c-ba41-422e-b757-5b93b41ce203] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.018468713s
addons_test.go:414: (dbg) Run:  kubectl --context addons-040821 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:431: (dbg) Done: out/minikube-linux-amd64 -p addons-040821 addons disable metrics-server --alsologtostderr -v=1: (1.017463234s)
--- PASS: TestAddons/parallel/MetricsServer (6.13s)
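The metrics-server assertion is simply that resource metrics become queryable once the deployment is healthy; the manual equivalent is a kubectl top call ("top nodes" is not run by the test but reads the same metrics API):

    kubectl --context addons-040821 top pods -n kube-system
    kubectl --context addons-040821 top nodes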

                                                
                                    
TestAddons/parallel/HelmTiller (11.33s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.296231ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-zpmzq" [56010cb9-4fa1-42f9-b511-359819a512e4] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013615382s
addons_test.go:472: (dbg) Run:  kubectl --context addons-040821 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-040821 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.636180215s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.33s)

                                                
                                    
TestAddons/parallel/CSI (66.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 24.123571ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [eec42d21-6339-4ccb-a500-7e91001caf70] Pending
helpers_test.go:344: "task-pv-pod" [eec42d21-6339-4ccb-a500-7e91001caf70] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [eec42d21-6339-4ccb-a500-7e91001caf70] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.032415435s
addons_test.go:583: (dbg) Run:  kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-040821 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-040821 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-040821 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-040821 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-040821 delete pod task-pv-pod: (1.334622918s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-040821 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [29fa3216-a65d-40d1-aa4b-3f8dc7d42898] Pending
helpers_test.go:344: "task-pv-pod-restore" [29fa3216-a65d-40d1-aa4b-3f8dc7d42898] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [29fa3216-a65d-40d1-aa4b-3f8dc7d42898] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.024148466s
addons_test.go:625: (dbg) Run:  kubectl --context addons-040821 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-040821 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-040821 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-040821 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.99086104s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (66.45s)
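The CSI test cycles a claim through provision, attach, snapshot, and restore: PVC, then pod, then VolumeSnapshot, then a snapshot-backed PVC and a second pod. The manifests live under testdata/csi-hostpath-driver/ in the minikube repo and are not reproduced in this log; a condensed sketch of the same sequence using those files (object names taken from the run above):

    kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/pvc.yaml            # claim "hpvc"
    kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/pv-pod.yaml         # writer pod "task-pv-pod"
    kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo"
    kubectl --context addons-040821 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
    kubectl --context addons-040821 delete pod task-pv-pod
    kubectl --context addons-040821 delete pvc hpvc
    kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore" from the snapshot
    kubectl --context addons-040821 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml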

                                                
                                    
TestAddons/parallel/Headlamp (14.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-040821 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-040821 --alsologtostderr -v=1: (1.973089205s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
2023/11/08 23:38:18 [DEBUG] GET http://192.168.39.182:5000
helpers_test.go:344: "headlamp-777fd4b855-kp7dm" [8fd1e7f5-79af-4dd7-96d3-930781c9561d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-kp7dm" [8fd1e7f5-79af-4dd7-96d3-930781c9561d] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.014608565s
--- PASS: TestAddons/parallel/Headlamp (14.99s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-9pl7g" [96c8ef95-d04b-4ced-aff2-c563024243ca] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011970565s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-040821
--- PASS: TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                    
TestAddons/parallel/LocalPath (53.75s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-040821 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-040821 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-040821 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e308220f-fd24-41dd-9602-5b34072e992d] Pending
helpers_test.go:344: "test-local-path" [e308220f-fd24-41dd-9602-5b34072e992d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e308220f-fd24-41dd-9602-5b34072e992d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e308220f-fd24-41dd-9602-5b34072e992d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.037864239s
addons_test.go:890: (dbg) Run:  kubectl --context addons-040821 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 ssh "cat /opt/local-path-provisioner/pvc-944c92dd-4e60-4a93-b658-8db0d39dcad3_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-040821 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-040821 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-040821 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-040821 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.963046554s)
--- PASS: TestAddons/parallel/LocalPath (53.75s)
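The local-path check writes through a PVC provisioned by the storage-provisioner-rancher addon and then reads the file back from the node's host path. A hedged sketch (the per-claim pvc-... directory name is generated, so the listing below only shows the provisioning root that appears in the run above):

    kubectl --context addons-040821 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-040821 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # After the pod completes, the written data is visible under the provisioner's root on the node.
    out/minikube-linux-amd64 -p addons-040821 ssh "ls /opt/local-path-provisioner/"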

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.04s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-98ktr" [c6b23a94-b1ec-40ea-9ca8-ed07c5c38130] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.280689326s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-040821
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.04s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-040821 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-040821 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/StoppedEnableDisable (102.66s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-040821
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-040821: (1m42.310938255s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-040821
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-040821
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-040821
--- PASS: TestAddons/StoppedEnableDisable (102.66s)

                                                
                                    
TestCertOptions (96.58s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-869913 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-869913 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m34.806795791s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-869913 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-869913 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-869913 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-869913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-869913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-869913: (1.20400851s)
--- PASS: TestCertOptions (96.58s)
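TestCertOptions starts a cluster with extra apiserver IPs/names and a non-default apiserver port, then inspects the generated certificate and kubeconfig. The same check can be made by hand; the grep patterns below are assumptions about standard openssl/kubectl output rather than anything the test asserts verbatim:

    # The extra IPs and names should appear in the certificate's SAN list.
    out/minikube-linux-amd64 -p cert-options-869913 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # The non-default port (8555 in this run) should show up in the kubeconfig server URL.
    kubectl --context cert-options-869913 config view --minify | grep server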

                                                
                                    
TestCertExpiration (341.48s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-442760 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-442760 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (2m19.618059952s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-442760 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-442760 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (20.426292186s)
helpers_test.go:175: Cleaning up "cert-expiration-442760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-442760
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-442760: (1.439153453s)
--- PASS: TestCertExpiration (341.48s)
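TestCertExpiration is two starts of the same profile: the first issues certificates valid for only three minutes, and the second, run after they have lapsed, passes --cert-expiration=8760h so they are regenerated for a year. A sketch of checking the resulting validity window (the openssl -dates call is not part of the test; the certificate path matches the one inspected in TestCertOptions above):

    out/minikube-linux-amd64 start -p cert-expiration-442760 --memory=2048 --driver=kvm2 --container-runtime=containerd --cert-expiration=3m
    # ...wait for the short-lived certificates to expire, then re-issue for 8760h (one year):
    out/minikube-linux-amd64 start -p cert-expiration-442760 --memory=2048 --driver=kvm2 --container-runtime=containerd --cert-expiration=8760h
    out/minikube-linux-amd64 -p cert-expiration-442760 ssh "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"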

                                                
                                    
TestForceSystemdFlag (107.35s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-261120 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1109 00:18:04.432574  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-261120 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m45.927330923s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-261120 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-261120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-261120
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-261120: (1.17596471s)
--- PASS: TestForceSystemdFlag (107.35s)
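TestForceSystemdFlag (and TestForceSystemdEnv below, which presumably drives the same behavior through the MINIKUBE_FORCE_SYSTEMD environment variable rather than the flag) asserts that containerd is switched to the systemd cgroup driver. A hedged way to eyeball that from the dumped config, assuming the usual containerd runc options key:

    out/minikube-linux-amd64 -p force-systemd-flag-261120 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # Expected (assumption about the exact TOML key): SystemdCgroup = true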

                                                
                                    
TestForceSystemdEnv (70.39s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-639540 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-639540 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m9.0678557s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-639540 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-639540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-639540
--- PASS: TestForceSystemdEnv (70.39s)

                                                
                                    
TestKVMDriverInstallOrUpdate (2.79s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.79s)

                                                
                                    
TestErrorSpam/start (0.42s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

                                                
                                    
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
TestErrorSpam/pause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 pause
--- PASS: TestErrorSpam/pause (1.66s)

                                                
                                    
TestErrorSpam/unpause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

                                                
                                    
TestErrorSpam/stop (2.30s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 stop: (2.109651022s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-764351 --log_dir /tmp/nospam-764351 stop
--- PASS: TestErrorSpam/stop (2.30s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17586-201782/.minikube/files/etc/test/nested/copy/208963/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (81.19s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-400359 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1108 23:43:04.432547  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:04.438493  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:04.448841  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:04.469220  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:04.509625  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:04.590078  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:04.750550  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:05.071206  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:05.712320  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:06.992858  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:09.553889  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:14.674895  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:43:24.916075  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-400359 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m21.191598303s)
--- PASS: TestFunctional/serial/StartWithProxy (81.19s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.73s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-400359 --alsologtostderr -v=8
E1108 23:43:45.397298  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-400359 --alsologtostderr -v=8: (6.731137513s)
functional_test.go:659: soft start took 6.73195314s for "functional-400359" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.73s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-400359 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 cache add registry.k8s.io/pause:3.1: (1.283055668s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 cache add registry.k8s.io/pause:3.3: (1.459024431s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 cache add registry.k8s.io/pause:latest: (1.347964248s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.81s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-400359 /tmp/TestFunctionalserialCacheCmdcacheadd_local1247839868/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 cache add minikube-local-cache-test:functional-400359
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 cache add minikube-local-cache-test:functional-400359: (1.44549776s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 cache delete minikube-local-cache-test:functional-400359
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-400359
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.81s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (251.498945ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 cache reload: (1.471242955s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)
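The cache subtests above exercise the full round trip: add images to minikube's local cache, delete one from the node's containerd store, and use cache reload to push the cached copy back. Condensed from the commands in this run:

    out/minikube-linux-amd64 -p functional-400359 cache add registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-400359 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-400359 cache reload      # re-pushes every cached image onto the node
    out/minikube-linux-amd64 -p functional-400359 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    out/minikube-linux-amd64 cache list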

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 kubectl -- --context functional-400359 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-400359 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 logs: (1.428799206s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 config get cpus: exit status 14 (91.693056ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 config get cpus: exit status 14 (70.514086ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-400359 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-400359 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 216390: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.55s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-400359 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-400359 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (162.416248ms)

                                                
                                                
-- stdout --
	* [functional-400359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 23:45:27.802270  215949 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:45:27.802433  215949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:27.802447  215949 out.go:309] Setting ErrFile to fd 2...
	I1108 23:45:27.802454  215949 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:27.802631  215949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	I1108 23:45:27.803220  215949 out.go:303] Setting JSON to false
	I1108 23:45:27.804326  215949 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":23282,"bootTime":1699463846,"procs":275,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 23:45:27.804401  215949 start.go:138] virtualization: kvm guest
	I1108 23:45:27.806707  215949 out.go:177] * [functional-400359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1108 23:45:27.808215  215949 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:45:27.809495  215949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:45:27.808302  215949 notify.go:220] Checking for updates...
	I1108 23:45:27.812297  215949 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:45:27.813782  215949 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	I1108 23:45:27.815302  215949 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 23:45:27.816623  215949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:45:27.818406  215949 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:45:27.818891  215949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:45:27.818948  215949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:45:27.836803  215949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41413
	I1108 23:45:27.837279  215949 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:45:27.838077  215949 main.go:141] libmachine: Using API Version  1
	I1108 23:45:27.838125  215949 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:45:27.838561  215949 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:45:27.838780  215949 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:45:27.839134  215949 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:45:27.839603  215949 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:45:27.839658  215949 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:45:27.856184  215949 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I1108 23:45:27.856771  215949 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:45:27.857373  215949 main.go:141] libmachine: Using API Version  1
	I1108 23:45:27.857398  215949 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:45:27.857810  215949 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:45:27.858022  215949 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:45:27.894333  215949 out.go:177] * Using the kvm2 driver based on existing profile
	I1108 23:45:27.895665  215949 start.go:298] selected driver: kvm2
	I1108 23:45:27.895684  215949 start.go:902] validating driver "kvm2" against &{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400
359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:45:27.895883  215949 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:45:27.898784  215949 out.go:177] 
	W1108 23:45:27.900419  215949 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1108 23:45:27.901933  215949 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-400359 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.36s)
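
Note: the two runs above exercise both sides of the --dry-run validation. A sketch of reproducing them against the same profile (exit codes taken from this log):

	# an undersized --memory request makes the dry-run fail validation (exit 23 above)
	out/minikube-linux-amd64 start -p functional-400359 --dry-run --memory 250MB \
	  --driver=kvm2 --container-runtime=containerd; echo "exit=$?"
	# without the undersized flag the dry-run validates the existing profile and exits cleanly
	out/minikube-linux-amd64 start -p functional-400359 --dry-run \
	  --driver=kvm2 --container-runtime=containerd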

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-400359 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-400359 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (218.291055ms)

                                                
                                                
-- stdout --
	* [functional-400359] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 23:45:28.181364  216009 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:45:28.181659  216009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:28.181722  216009 out.go:309] Setting ErrFile to fd 2...
	I1108 23:45:28.181743  216009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:45:28.182223  216009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	I1108 23:45:28.183068  216009 out.go:303] Setting JSON to false
	I1108 23:45:28.184584  216009 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":23282,"bootTime":1699463846,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 23:45:28.184714  216009 start.go:138] virtualization: kvm guest
	I1108 23:45:28.187978  216009 out.go:177] * [functional-400359] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1108 23:45:28.190471  216009 out.go:177]   - MINIKUBE_LOCATION=17586
	I1108 23:45:28.190490  216009 notify.go:220] Checking for updates...
	I1108 23:45:28.196276  216009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 23:45:28.198113  216009 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1108 23:45:28.199761  216009 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	I1108 23:45:28.201438  216009 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 23:45:28.203109  216009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 23:45:28.205234  216009 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:45:28.205722  216009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:45:28.205805  216009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:45:28.231008  216009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I1108 23:45:28.231591  216009 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:45:28.232554  216009 main.go:141] libmachine: Using API Version  1
	I1108 23:45:28.232581  216009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:45:28.233382  216009 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:45:28.233723  216009 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:45:28.234082  216009 driver.go:378] Setting default libvirt URI to qemu:///system
	I1108 23:45:28.235000  216009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:45:28.235098  216009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:45:28.261720  216009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I1108 23:45:28.262307  216009 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:45:28.263085  216009 main.go:141] libmachine: Using API Version  1
	I1108 23:45:28.263118  216009 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:45:28.263570  216009 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:45:28.263741  216009 main.go:141] libmachine: (functional-400359) Calling .DriverName
	I1108 23:45:28.306495  216009 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1108 23:45:28.308140  216009 start.go:298] selected driver: kvm2
	I1108 23:45:28.308161  216009 start.go:902] validating driver "kvm2" against &{Name:functional-400359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.32.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-400
359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStri
ng:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1108 23:45:28.308267  216009 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 23:45:28.310773  216009 out.go:177] 
	W1108 23:45:28.312366  216009 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1108 23:45:28.314104  216009 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
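
Note: the French output above comes from minikube's built-in translations. As a sketch only, assuming the language is selected from the standard locale environment variables (this log does not show how the harness sets it):

	# request the same dry-run with a French locale; LC_ALL here is an assumption of this sketch
	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-400359 --dry-run \
	  --memory 250MB --driver=kvm2 --container-runtime=containerd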

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
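
Note: the three invocations above cover the default, Go-template and JSON output modes of minikube status ("kublet" is simply the label used in the test's format string). For reference:

	# default human-readable status
	out/minikube-linux-amd64 -p functional-400359 status
	# custom Go template over the same fields the test formats
	out/minikube-linux-amd64 -p functional-400359 status \
	  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	# machine-readable JSON
	out/minikube-linux-amd64 -p functional-400359 status -o json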

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (40.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-400359 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-400359 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-vxcjx" [2a54c345-70c8-4505-8cc4-48821fca97ba] Pending
helpers_test.go:344: "hello-node-connect-55497b8b78-vxcjx" [2a54c345-70c8-4505-8cc4-48821fca97ba] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-vxcjx" [2a54c345-70c8-4505-8cc4-48821fca97ba] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 32.014697782s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.189:31570
functional_test.go:1660: error fetching http://192.168.39.189:31570: Get "http://192.168.39.189:31570": dial tcp 192.168.39.189:31570: connect: connection refused
functional_test.go:1660: error fetching http://192.168.39.189:31570: Get "http://192.168.39.189:31570": dial tcp 192.168.39.189:31570: connect: connection refused
functional_test.go:1660: error fetching http://192.168.39.189:31570: Get "http://192.168.39.189:31570": dial tcp 192.168.39.189:31570: connect: connection refused
functional_test.go:1660: error fetching http://192.168.39.189:31570: Get "http://192.168.39.189:31570": dial tcp 192.168.39.189:31570: connect: connection refused
functional_test.go:1674: http://192.168.39.189:31570: success! body:

Hostname: hello-node-connect-55497b8b78-vxcjx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.189:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.189:31570
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (40.18s)
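
Note: the repeated "connection refused" lines show the NodePort taking a few seconds to start answering; the harness simply retries until it gets a response. A hedged equivalent in shell (curl and the retry loop are assumptions of this sketch; the test uses Go's HTTP client):

	# resolve the NodePort URL for the service, then poll until the echoserver answers
	URL=$(out/minikube-linux-amd64 -p functional-400359 service hello-node-connect --url)
	for i in $(seq 1 30); do
	  curl -fsS "$URL" >/dev/null && { echo "up after $i attempts"; break; }
	  sleep 2
	done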

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (111.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [01aed977-1439-433c-b8b1-869c92fcd9e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.025564113s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-400359 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-400359 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-400359 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-400359 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-400359 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-400359 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-400359 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-400359 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-400359 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-400359 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-400359 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-400359 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8f9a549c-c3b7-4da2-b2f1-8b7ea411395c] Pending
helpers_test.go:344: "sp-pod" [8f9a549c-c3b7-4da2-b2f1-8b7ea411395c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8f9a549c-c3b7-4da2-b2f1-8b7ea411395c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.026471038s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-400359 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-400359 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-400359 delete -f testdata/storage-provisioner/pod.yaml: (1.334673784s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-400359 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5645bab5-a184-421f-9a94-621d66a0736d] Pending
helpers_test.go:344: "sp-pod" [5645bab5-a184-421f-9a94-621d66a0736d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5645bab5-a184-421f-9a94-621d66a0736d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.026597415s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-400359 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (111.57s)
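
Note: the point of the sequence above is that the second sp-pod still sees /tmp/mount/foo written by the first one, which proves the claim is backed by persistent storage. Condensed (kubectl wait stands in for the harness's own label polling):

	# write a marker through the first pod, recreate the pod, then check the volume again
	kubectl --context functional-400359 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-400359 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-400359 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-400359 wait --for=condition=Ready pod/sp-pod --timeout=180s
	kubectl --context functional-400359 exec sp-pod -- ls /tmp/mount   # expects: foo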

                                                
                                    
TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh -n functional-400359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 cp functional-400359:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2377440341/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh -n functional-400359 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.08s)
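
Note: the cp test copies a file into the node and back out, checking the content over ssh at each step. A sketch using a fixed output path instead of the test's temp directory:

	# host -> node, then verify inside the VM
	out/minikube-linux-amd64 -p functional-400359 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-400359 ssh -n functional-400359 "sudo cat /home/docker/cp-test.txt"
	# node -> host, then compare with the original (diff replaces the test's in-Go comparison)
	out/minikube-linux-amd64 -p functional-400359 cp functional-400359:/home/docker/cp-test.txt /tmp/cp-test.txt
	diff testdata/cp-test.txt /tmp/cp-test.txt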

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/208963/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo cat /etc/test/nested/copy/208963/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
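
Note: file sync works by copying anything under the host's .minikube/files tree into the VM at the same path rooted at /, which is why the test looks for /etc/test/nested/copy/208963/hosts. As a sketch, assuming that standard layout:

	# a file staged on the host at .minikube/files/etc/test/nested/copy/208963/hosts
	# should be readable at the same path inside the VM after the sync
	out/minikube-linux-amd64 -p functional-400359 ssh "sudo cat /etc/test/nested/copy/208963/hosts"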

                                                
                                    
TestFunctional/parallel/CertSync (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/208963.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo cat /etc/ssl/certs/208963.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/208963.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo cat /usr/share/ca-certificates/208963.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2089632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo cat /etc/ssl/certs/2089632.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2089632.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo cat /usr/share/ca-certificates/2089632.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)
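
Note: the *.0 names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links for the synced certificates. Assuming openssl is available inside the VM image, the hash can be recomputed and compared against those link names:

	# print the subject hash of a synced cert and compare it with the *.0 filenames above
	out/minikube-linux-amd64 -p functional-400359 ssh \
	  "sudo openssl x509 -noout -subject_hash -in /etc/ssl/certs/208963.pem"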

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 ssh "sudo systemctl is-active docker": exit status 1 (327.855514ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 ssh "sudo systemctl is-active crio": exit status 1 (324.940705ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
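
Note: exit status 1 is the expected outcome here: systemctl is-active reports "inactive" with a non-zero status (3 inside the VM, surfaced as exit 1 by minikube ssh), which is exactly what the test wants for the two runtimes that are not in use on a containerd profile. A quick manual check:

	# docker and crio should be inactive; containerd should be the only active runtime
	out/minikube-linux-amd64 -p functional-400359 ssh "sudo systemctl is-active docker"      # inactive
	out/minikube-linux-amd64 -p functional-400359 ssh "sudo systemctl is-active crio"        # inactive
	out/minikube-linux-amd64 -p functional-400359 ssh "sudo systemctl is-active containerd"  # active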

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-400359 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-400359
docker.io/library/minikube-local-cache-test:functional-400359
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-400359 image ls --format short --alsologtostderr:
I1108 23:45:52.951407  217055 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:52.951697  217055 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:52.951709  217055 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:52.951716  217055 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:52.951978  217055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
I1108 23:45:52.952617  217055 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:52.952731  217055 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:52.953249  217055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:52.953307  217055 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:52.968637  217055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
I1108 23:45:52.969242  217055 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:52.969913  217055 main.go:141] libmachine: Using API Version  1
I1108 23:45:52.969949  217055 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:52.970345  217055 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:52.970565  217055 main.go:141] libmachine: (functional-400359) Calling .GetState
I1108 23:45:52.972654  217055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:52.972706  217055 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:52.988042  217055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
I1108 23:45:52.988517  217055 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:52.989123  217055 main.go:141] libmachine: Using API Version  1
I1108 23:45:52.989153  217055 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:52.989498  217055 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:52.989698  217055 main.go:141] libmachine: (functional-400359) Calling .DriverName
I1108 23:45:52.989954  217055 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:52.989979  217055 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
I1108 23:45:52.993014  217055 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:52.993532  217055 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
I1108 23:45:52.993569  217055 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:52.993741  217055 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
I1108 23:45:52.993968  217055 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
I1108 23:45:52.994127  217055 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
I1108 23:45:52.994286  217055 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
I1108 23:45:53.099748  217055 ssh_runner.go:195] Run: sudo crictl images --output json
I1108 23:45:53.161049  217055 main.go:141] libmachine: Making call to close driver server
I1108 23:45:53.161065  217055 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:53.161458  217055 main.go:141] libmachine: (functional-400359) DBG | Closing plugin on server side
I1108 23:45:53.161563  217055 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:53.161604  217055 main.go:141] libmachine: Making call to close connection to plugin binary
I1108 23:45:53.161627  217055 main.go:141] libmachine: Making call to close driver server
I1108 23:45:53.161640  217055 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:53.161903  217055 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:53.161918  217055 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-400359 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.3            | sha256:bfc896 | 24.6MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/minikube-local-cache-test | functional-400359  | sha256:fc358a | 1.01kB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-scheduler              | v1.28.3            | sha256:6d1b4f | 18.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| gcr.io/google-containers/addon-resizer      | functional-400359  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-apiserver              | v1.28.3            | sha256:537434 | 34.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-controller-manager     | v1.28.3            | sha256:10baa1 | 33.4MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-400359 image ls --format table --alsologtostderr:
I1108 23:45:54.047474  217180 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:54.047811  217180 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:54.047822  217180 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:54.047827  217180 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:54.048025  217180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
I1108 23:45:54.048661  217180 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:54.048785  217180 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:54.049166  217180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:54.049218  217180 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:54.064806  217180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
I1108 23:45:54.065381  217180 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:54.066096  217180 main.go:141] libmachine: Using API Version  1
I1108 23:45:54.066133  217180 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:54.066549  217180 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:54.066784  217180 main.go:141] libmachine: (functional-400359) Calling .GetState
I1108 23:45:54.068744  217180 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:54.068831  217180 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:54.084688  217180 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43519
I1108 23:45:54.085211  217180 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:54.085915  217180 main.go:141] libmachine: Using API Version  1
I1108 23:45:54.085956  217180 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:54.086312  217180 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:54.086607  217180 main.go:141] libmachine: (functional-400359) Calling .DriverName
I1108 23:45:54.086842  217180 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:54.086879  217180 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
I1108 23:45:54.090162  217180 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:54.090637  217180 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
I1108 23:45:54.090685  217180 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:54.090885  217180 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
I1108 23:45:54.091112  217180 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
I1108 23:45:54.091309  217180 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
I1108 23:45:54.091462  217180 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
I1108 23:45:54.189368  217180 ssh_runner.go:195] Run: sudo crictl images --output json
I1108 23:45:54.287794  217180 main.go:141] libmachine: Making call to close driver server
I1108 23:45:54.287819  217180 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:54.288176  217180 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:54.288196  217180 main.go:141] libmachine: Making call to close connection to plugin binary
I1108 23:45:54.288205  217180 main.go:141] libmachine: Making call to close driver server
I1108 23:45:54.288214  217180 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:54.288215  217180 main.go:141] libmachine: (functional-400359) DBG | Closing plugin on server side
I1108 23:45:54.288525  217180 main.go:141] libmachine: (functional-400359) DBG | Closing plugin on server side
I1108 23:45:54.288536  217180 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:54.288551  217180 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-400359 image ls --format json --alsologtostderr:
[{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:fc358a50a9d9ce84dbc5e58d2083783a64d68a477617ddf0cc6516d4d1eef9c4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-400359"],"size":"1007"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-400359"],"size":"10823156"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.i
o/pause:latest"],"size":"72306"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:6e38f40d628db3002f5617342c8872c93
5de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"34666616"},{"id":"sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707"],"repoTags":["registry.k8s.io/kube-controller-m
anager:v1.28.3"],"size":"33404036"},{"id":"sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"18815674"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repo
Tags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"24561096"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-400359 image ls --format json --alsologtostderr:
I1108 23:45:53.772185  217139 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:53.772399  217139 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:53.772408  217139 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:53.772416  217139 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:53.772746  217139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
I1108 23:45:53.773640  217139 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:53.773848  217139 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:53.774449  217139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:53.774516  217139 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:53.790084  217139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
I1108 23:45:53.790716  217139 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:53.791355  217139 main.go:141] libmachine: Using API Version  1
I1108 23:45:53.791374  217139 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:53.791787  217139 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:53.791958  217139 main.go:141] libmachine: (functional-400359) Calling .GetState
I1108 23:45:53.793574  217139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:53.793612  217139 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:53.807706  217139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
I1108 23:45:53.808106  217139 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:53.808643  217139 main.go:141] libmachine: Using API Version  1
I1108 23:45:53.808661  217139 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:53.808992  217139 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:53.809217  217139 main.go:141] libmachine: (functional-400359) Calling .DriverName
I1108 23:45:53.809522  217139 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:53.809557  217139 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
I1108 23:45:53.812406  217139 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:53.812883  217139 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
I1108 23:45:53.812923  217139 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:53.813074  217139 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
I1108 23:45:53.813232  217139 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
I1108 23:45:53.813387  217139 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
I1108 23:45:53.813643  217139 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
I1108 23:45:53.910227  217139 ssh_runner.go:195] Run: sudo crictl images --output json
I1108 23:45:53.978339  217139 main.go:141] libmachine: Making call to close driver server
I1108 23:45:53.978354  217139 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:53.978675  217139 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:53.978697  217139 main.go:141] libmachine: (functional-400359) DBG | Closing plugin on server side
I1108 23:45:53.978707  217139 main.go:141] libmachine: Making call to close connection to plugin binary
I1108 23:45:53.978723  217139 main.go:141] libmachine: Making call to close driver server
I1108 23:45:53.978733  217139 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:53.978997  217139 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:53.979020  217139 main.go:141] libmachine: Making call to close connection to plugin binary
I1108 23:45:53.979019  217139 main.go:141] libmachine: (functional-400359) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls --format yaml --alsologtostderr
2023/11/08 23:45:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-400359 image ls --format yaml --alsologtostderr:
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "34666616"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "24561096"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:fc358a50a9d9ce84dbc5e58d2083783a64d68a477617ddf0cc6516d4d1eef9c4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-400359
size: "1007"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-400359
size: "10823156"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "18815674"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "33404036"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-400359 image ls --format yaml --alsologtostderr:
I1108 23:45:53.240771  217079 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:53.240956  217079 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:53.240970  217079 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:53.240977  217079 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:53.241237  217079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
I1108 23:45:53.242002  217079 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:53.242146  217079 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:53.242565  217079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:53.242636  217079 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:53.258038  217079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
I1108 23:45:53.258596  217079 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:53.259291  217079 main.go:141] libmachine: Using API Version  1
I1108 23:45:53.259322  217079 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:53.259725  217079 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:53.259973  217079 main.go:141] libmachine: (functional-400359) Calling .GetState
I1108 23:45:53.261988  217079 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:53.262038  217079 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:53.277673  217079 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
I1108 23:45:53.278128  217079 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:53.278642  217079 main.go:141] libmachine: Using API Version  1
I1108 23:45:53.278667  217079 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:53.279164  217079 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:53.279390  217079 main.go:141] libmachine: (functional-400359) Calling .DriverName
I1108 23:45:53.279642  217079 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:53.279670  217079 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
I1108 23:45:53.283074  217079 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:53.283539  217079 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
I1108 23:45:53.283569  217079 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:53.283749  217079 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
I1108 23:45:53.283975  217079 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
I1108 23:45:53.284121  217079 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
I1108 23:45:53.284294  217079 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
I1108 23:45:53.400783  217079 ssh_runner.go:195] Run: sudo crictl images --output json
I1108 23:45:53.447373  217079 main.go:141] libmachine: Making call to close driver server
I1108 23:45:53.447396  217079 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:53.447803  217079 main.go:141] libmachine: (functional-400359) DBG | Closing plugin on server side
I1108 23:45:53.447804  217079 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:53.447847  217079 main.go:141] libmachine: Making call to close connection to plugin binary
I1108 23:45:53.447867  217079 main.go:141] libmachine: Making call to close driver server
I1108 23:45:53.447879  217079 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:53.448151  217079 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:53.448171  217079 main.go:141] libmachine: (functional-400359) DBG | Closing plugin on server side
I1108 23:45:53.448173  217079 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 ssh pgrep buildkitd: exit status 1 (241.153908ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image build -t localhost/my-image:functional-400359 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 image build -t localhost/my-image:functional-400359 testdata/build --alsologtostderr: (3.230494286s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-400359 image build -t localhost/my-image:functional-400359 testdata/build --alsologtostderr:
I1108 23:45:53.766694  217133 out.go:296] Setting OutFile to fd 1 ...
I1108 23:45:53.766899  217133 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:53.766911  217133 out.go:309] Setting ErrFile to fd 2...
I1108 23:45:53.766918  217133 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1108 23:45:53.767129  217133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
I1108 23:45:53.767851  217133 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:53.768667  217133 config.go:182] Loaded profile config "functional-400359": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1108 23:45:53.769088  217133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:53.769175  217133 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:53.787775  217133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36673
I1108 23:45:53.788374  217133 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:53.789054  217133 main.go:141] libmachine: Using API Version  1
I1108 23:45:53.789083  217133 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:53.789522  217133 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:53.789714  217133 main.go:141] libmachine: (functional-400359) Calling .GetState
I1108 23:45:53.792062  217133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I1108 23:45:53.792105  217133 main.go:141] libmachine: Launching plugin server for driver kvm2
I1108 23:45:53.807066  217133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
I1108 23:45:53.807540  217133 main.go:141] libmachine: () Calling .GetVersion
I1108 23:45:53.808077  217133 main.go:141] libmachine: Using API Version  1
I1108 23:45:53.808099  217133 main.go:141] libmachine: () Calling .SetConfigRaw
I1108 23:45:53.808468  217133 main.go:141] libmachine: () Calling .GetMachineName
I1108 23:45:53.808626  217133 main.go:141] libmachine: (functional-400359) Calling .DriverName
I1108 23:45:53.808816  217133 ssh_runner.go:195] Run: systemctl --version
I1108 23:45:53.808843  217133 main.go:141] libmachine: (functional-400359) Calling .GetSSHHostname
I1108 23:45:53.812264  217133 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:53.812690  217133 main.go:141] libmachine: (functional-400359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:41:bc", ip: ""} in network mk-functional-400359: {Iface:virbr1 ExpiryTime:2023-11-09 00:42:39 +0000 UTC Type:0 Mac:52:54:00:ad:41:bc Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-400359 Clientid:01:52:54:00:ad:41:bc}
I1108 23:45:53.812726  217133 main.go:141] libmachine: (functional-400359) DBG | domain functional-400359 has defined IP address 192.168.39.189 and MAC address 52:54:00:ad:41:bc in network mk-functional-400359
I1108 23:45:53.812840  217133 main.go:141] libmachine: (functional-400359) Calling .GetSSHPort
I1108 23:45:53.813133  217133 main.go:141] libmachine: (functional-400359) Calling .GetSSHKeyPath
I1108 23:45:53.813325  217133 main.go:141] libmachine: (functional-400359) Calling .GetSSHUsername
I1108 23:45:53.813466  217133 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/functional-400359/id_rsa Username:docker}
I1108 23:45:53.904854  217133 build_images.go:151] Building image from path: /tmp/build.1358773957.tar
I1108 23:45:53.904930  217133 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1108 23:45:53.915577  217133 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1358773957.tar
I1108 23:45:53.922749  217133 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1358773957.tar: stat -c "%s %y" /var/lib/minikube/build/build.1358773957.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1358773957.tar': No such file or directory
I1108 23:45:53.922791  217133 ssh_runner.go:362] scp /tmp/build.1358773957.tar --> /var/lib/minikube/build/build.1358773957.tar (3072 bytes)
I1108 23:45:53.954123  217133 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1358773957
I1108 23:45:53.966683  217133 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1358773957 -xf /var/lib/minikube/build/build.1358773957.tar
I1108 23:45:53.981586  217133 containerd.go:378] Building image: /var/lib/minikube/build/build.1358773957
I1108 23:45:53.981657  217133 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1358773957 --local dockerfile=/var/lib/minikube/build/build.1358773957 --output type=image,name=localhost/my-image:functional-400359
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 ...

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#4 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 1.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:0ffcfc6a95a78dfc9f097b87b040a9dc6a7ccb994dc3e3e4ec12a5e50330d358 0.0s done
#8 exporting config sha256:01f7852d94f04b47d30c727f8a9eea37979392611a0468f7e9ed76b6056fe9e3 0.0s done
#8 naming to localhost/my-image:functional-400359 done
#8 DONE 0.2s
I1108 23:45:56.880936  217133 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1358773957 --local dockerfile=/var/lib/minikube/build/build.1358773957 --output type=image,name=localhost/my-image:functional-400359: (2.899246071s)
I1108 23:45:56.881023  217133 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1358773957
I1108 23:45:56.904986  217133 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1358773957.tar
I1108 23:45:56.916696  217133 build_images.go:207] Built localhost/my-image:functional-400359 from /tmp/build.1358773957.tar
I1108 23:45:56.916741  217133 build_images.go:123] succeeded building to: functional-400359
I1108 23:45:56.916747  217133 build_images.go:124] failed building to: 
I1108 23:45:56.916797  217133 main.go:141] libmachine: Making call to close driver server
I1108 23:45:56.916815  217133 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:56.917149  217133 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:56.917172  217133 main.go:141] libmachine: Making call to close connection to plugin binary
I1108 23:45:56.917172  217133 main.go:141] libmachine: (functional-400359) DBG | Closing plugin on server side
I1108 23:45:56.917183  217133 main.go:141] libmachine: Making call to close driver server
I1108 23:45:56.917194  217133 main.go:141] libmachine: (functional-400359) Calling .Close
I1108 23:45:56.918927  217133 main.go:141] libmachine: (functional-400359) DBG | Closing plugin on server side
I1108 23:45:56.919184  217133 main.go:141] libmachine: Successfully made call to close driver server
I1108 23:45:56.919205  217133 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)

TestFunctional/parallel/ImageCommands/Setup (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-400359
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.90s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image load --daemon gcr.io/google-containers/addon-resizer:functional-400359 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 image load --daemon gcr.io/google-containers/addon-resizer:functional-400359 --alsologtostderr: (5.366122678s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.68s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image load --daemon gcr.io/google-containers/addon-resizer:functional-400359 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 image load --daemon gcr.io/google-containers/addon-resizer:functional-400359 --alsologtostderr: (4.415250564s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.74s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "278.472022ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "84.74965ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "455.846197ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "79.348142ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.54s)

TestFunctional/parallel/MountCmd/any-port (22.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdany-port1406249928/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699487125844545570" to /tmp/TestFunctionalparallelMountCmdany-port1406249928/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699487125844545570" to /tmp/TestFunctionalparallelMountCmdany-port1406249928/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699487125844545570" to /tmp/TestFunctionalparallelMountCmdany-port1406249928/001/test-1699487125844545570
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.994056ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  8 23:45 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  8 23:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  8 23:45 test-1699487125844545570
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh cat /mount-9p/test-1699487125844545570
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-400359 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ab2dd8e6-d16c-4657-bc9a-fe9260543756] Pending
helpers_test.go:344: "busybox-mount" [ab2dd8e6-d16c-4657-bc9a-fe9260543756] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ab2dd8e6-d16c-4657-bc9a-fe9260543756] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ab2dd8e6-d16c-4657-bc9a-fe9260543756] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 19.031119607s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-400359 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdany-port1406249928/001:/mount-9p --alsologtostderr -v=1] ...
E1108 23:45:48.278727  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/MountCmd/any-port (22.60s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-400359
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image load --daemon gcr.io/google-containers/addon-resizer:functional-400359 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 image load --daemon gcr.io/google-containers/addon-resizer:functional-400359 --alsologtostderr: (3.922444434s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.13s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image save gcr.io/google-containers/addon-resizer:functional-400359 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 image save gcr.io/google-containers/addon-resizer:functional-400359 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.118080081s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.12s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image rm gcr.io/google-containers/addon-resizer:functional-400359 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.236164038s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.49s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-400359
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 image save --daemon gcr.io/google-containers/addon-resizer:functional-400359 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-400359 image save --daemon gcr.io/google-containers/addon-resizer:functional-400359 --alsologtostderr: (1.125480255s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-400359
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)

TestFunctional/parallel/MountCmd/specific-port (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdspecific-port2297079264/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (262.94688ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdspecific-port2297079264/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 ssh "sudo umount -f /mount-9p": exit status 1 (267.57119ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-400359 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdspecific-port2297079264/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590747690/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590747690/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590747690/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T" /mount1: exit status 1 (250.58246ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-400359 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-400359 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590747690/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590747690/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-400359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup590747690/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.47s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-400359
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-400359
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-400359
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (84.73s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-856841 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1108 23:48:04.431599  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:48:32.119552  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-856841 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m24.733708004s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (84.73s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.14s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-856841 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-856841 addons enable ingress --alsologtostderr -v=5: (11.143922767s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.14s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-856841 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.65s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (41.51s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-856841 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-856841 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.049240995s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-856841 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-856841 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [387058fc-088a-4bdb-9f58-e66ab52f420c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [387058fc-088a-4bdb-9f58-e66ab52f420c] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.019255104s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-856841 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-856841 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-856841 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.157
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-856841 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-856841 addons disable ingress-dns --alsologtostderr -v=1: (12.562521086s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-856841 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-856841 addons disable ingress --alsologtostderr -v=1: (7.652499101s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (41.51s)

TestJSONOutput/start/Command (81.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-889502 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E1108 23:50:23.687181  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:23.692573  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:23.702960  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:23.723334  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:23.763737  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:23.844166  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:24.004674  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:24.325334  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:24.966459  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:26.246997  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:28.807908  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:33.929005  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:50:44.170072  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-889502 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m21.115189424s)
--- PASS: TestJSONOutput/start/Command (81.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-889502 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-889502 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.12s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-889502 --output=json --user=testUser
E1108 23:51:04.651019  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-889502 --output=json --user=testUser: (7.116397118s)
--- PASS: TestJSONOutput/stop/Command (7.12s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-412148 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-412148 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.574799ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7c88e60f-69fb-4b2d-b611-8ca640d0bc16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-412148] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"499edb1d-8b40-47d0-a1a7-ef75b5bbdbe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17586"}}
	{"specversion":"1.0","id":"12890795-d2d0-4940-a785-943794c82e83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"850b217f-cd93-47a3-a490-916f609dfa4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig"}}
	{"specversion":"1.0","id":"8e47a5ee-be10-4a77-8796-41c6f014a620","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube"}}
	{"specversion":"1.0","id":"3b06a923-e44c-4364-a60f-2c37eae0e017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2ede493a-9fdd-41bc-a30d-f81b013aa70a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2bc28961-d63a-415d-80c4-8801c5fbd7f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-412148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-412148
--- PASS: TestErrorJSONOutput (0.24s)
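
Aside for readers of the JSON output above: the lines emitted by `minikube start --output=json` are CloudEvents-style envelopes, and every field used below (specversion, id, source, type, datacontenttype, data) appears verbatim in the stdout block of TestErrorJSONOutput. The following is a minimal, hypothetical Go sketch, not part of the test suite, showing how such a line could be decoded; the struct and program are illustrative only.

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors only the fields visible in the JSON lines above;
// anything beyond those fields would be an assumption.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Sample line copied verbatim from the TestErrorJSONOutput stdout above.
	line := `{"specversion":"1.0","id":"2bc28961-d63a-415d-80c4-8801c5fbd7f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// For io.k8s.sigs.minikube.error events, the exit code and message live under "data".
	fmt.Printf("type=%s exitcode=%s message=%q\n", ev.Type, ev.Data["exitcode"], ev.Data["message"])
}

Judging by their names, the parallel DistinctCurrentSteps and IncreasingCurrentSteps subtests earlier in this report appear to validate the data.currentstep sequence of these events rather than re-running minikube.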

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (132.19s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-792008 --driver=kvm2  --container-runtime=containerd
E1108 23:51:45.612089  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-792008 --driver=kvm2  --container-runtime=containerd: (1m3.045951388s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-794447 --driver=kvm2  --container-runtime=containerd
E1108 23:53:04.432386  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1108 23:53:07.534361  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-794447 --driver=kvm2  --container-runtime=containerd: (1m6.199968153s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-792008
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-794447
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-794447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-794447
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-794447: (1.068443809s)
helpers_test.go:175: Cleaning up "first-792008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-792008
--- PASS: TestMinikubeProfile (132.19s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.52s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-838035 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-838035 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.519501701s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.52s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-838035 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-838035 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-857254 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1108 23:53:52.926673  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:52.931986  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:52.942352  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:52.962724  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:53.003123  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:53.083524  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:53.244193  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:53.564865  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:54.205880  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:55.487020  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:53:58.047544  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:54:03.168149  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:54:13.408726  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-857254 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.053481586s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.05s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.44s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857254 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857254 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.44s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-838035 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-838035 --alsologtostderr -v=5: (1.198438084s)
--- PASS: TestMountStart/serial/DeleteFirst (1.20s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.51s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857254 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857254 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.51s)

                                                
                                    
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-857254
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-857254: (1.241329542s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.76s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-857254
E1108 23:54:33.889051  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-857254: (23.762030143s)
--- PASS: TestMountStart/serial/RestartStopped (24.76s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857254 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-857254 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (127.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592243 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1108 23:55:14.849936  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:55:23.686782  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:55:51.374841  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1108 23:56:36.770240  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-592243 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m7.181176468s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.64s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-592243 -- rollout status deployment/busybox: (1.832950402s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2h6hz -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2spk8 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2h6hz -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2spk8 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2h6hz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2spk8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.84s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2h6hz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2h6hz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2spk8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-592243 -- exec busybox-5bc68d56bd-2spk8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
TestMultiNode/serial/AddNode (43.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-592243 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-592243 -v 3 --alsologtostderr: (42.437607166s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp testdata/cp-test.txt multinode-592243:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp multinode-592243:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3003071992/001/cp-test_multinode-592243.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp multinode-592243:/home/docker/cp-test.txt multinode-592243-m02:/home/docker/cp-test_multinode-592243_multinode-592243-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m02 "sudo cat /home/docker/cp-test_multinode-592243_multinode-592243-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp multinode-592243:/home/docker/cp-test.txt multinode-592243-m03:/home/docker/cp-test_multinode-592243_multinode-592243-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m03 "sudo cat /home/docker/cp-test_multinode-592243_multinode-592243-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp testdata/cp-test.txt multinode-592243-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp multinode-592243-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3003071992/001/cp-test_multinode-592243-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp multinode-592243-m02:/home/docker/cp-test.txt multinode-592243:/home/docker/cp-test_multinode-592243-m02_multinode-592243.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243 "sudo cat /home/docker/cp-test_multinode-592243-m02_multinode-592243.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp multinode-592243-m02:/home/docker/cp-test.txt multinode-592243-m03:/home/docker/cp-test_multinode-592243-m02_multinode-592243-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m03 "sudo cat /home/docker/cp-test_multinode-592243-m02_multinode-592243-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp testdata/cp-test.txt multinode-592243-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp multinode-592243-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3003071992/001/cp-test_multinode-592243-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp multinode-592243-m03:/home/docker/cp-test.txt multinode-592243:/home/docker/cp-test_multinode-592243-m03_multinode-592243.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243 "sudo cat /home/docker/cp-test_multinode-592243-m03_multinode-592243.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 cp multinode-592243-m03:/home/docker/cp-test.txt multinode-592243-m02:/home/docker/cp-test_multinode-592243-m03_multinode-592243-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 ssh -n multinode-592243-m02 "sudo cat /home/docker/cp-test_multinode-592243-m03_multinode-592243-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.33s)

                                                
                                    
TestMultiNode/serial/StopNode (12.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-592243 node stop m03: (11.455806099s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-592243 status: exit status 7 (472.239472ms)

                                                
                                                
-- stdout --
	multinode-592243
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-592243-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-592243-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-592243 status --alsologtostderr: exit status 7 (468.114131ms)

                                                
                                                
-- stdout --
	multinode-592243
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-592243-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-592243-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 23:58:02.814557  224051 out.go:296] Setting OutFile to fd 1 ...
	I1108 23:58:02.814712  224051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:58:02.814721  224051 out.go:309] Setting ErrFile to fd 2...
	I1108 23:58:02.814725  224051 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1108 23:58:02.814960  224051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	I1108 23:58:02.815184  224051 out.go:303] Setting JSON to false
	I1108 23:58:02.815222  224051 mustload.go:65] Loading cluster: multinode-592243
	I1108 23:58:02.815333  224051 notify.go:220] Checking for updates...
	I1108 23:58:02.815704  224051 config.go:182] Loaded profile config "multinode-592243": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1108 23:58:02.815722  224051 status.go:255] checking status of multinode-592243 ...
	I1108 23:58:02.816284  224051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:58:02.816379  224051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:58:02.832098  224051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40899
	I1108 23:58:02.832557  224051 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:58:02.833332  224051 main.go:141] libmachine: Using API Version  1
	I1108 23:58:02.833367  224051 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:58:02.833794  224051 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:58:02.834136  224051 main.go:141] libmachine: (multinode-592243) Calling .GetState
	I1108 23:58:02.835947  224051 status.go:330] multinode-592243 host status = "Running" (err=<nil>)
	I1108 23:58:02.835973  224051 host.go:66] Checking if "multinode-592243" exists ...
	I1108 23:58:02.836338  224051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:58:02.836409  224051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:58:02.851516  224051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I1108 23:58:02.851952  224051 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:58:02.852423  224051 main.go:141] libmachine: Using API Version  1
	I1108 23:58:02.852446  224051 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:58:02.852781  224051 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:58:02.852959  224051 main.go:141] libmachine: (multinode-592243) Calling .GetIP
	I1108 23:58:02.855874  224051 main.go:141] libmachine: (multinode-592243) DBG | domain multinode-592243 has defined MAC address 52:54:00:3b:99:1c in network mk-multinode-592243
	I1108 23:58:02.856288  224051 main.go:141] libmachine: (multinode-592243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:99:1c", ip: ""} in network mk-multinode-592243: {Iface:virbr1 ExpiryTime:2023-11-09 00:55:03 +0000 UTC Type:0 Mac:52:54:00:3b:99:1c Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-592243 Clientid:01:52:54:00:3b:99:1c}
	I1108 23:58:02.856314  224051 main.go:141] libmachine: (multinode-592243) DBG | domain multinode-592243 has defined IP address 192.168.39.198 and MAC address 52:54:00:3b:99:1c in network mk-multinode-592243
	I1108 23:58:02.856431  224051 host.go:66] Checking if "multinode-592243" exists ...
	I1108 23:58:02.856775  224051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:58:02.856820  224051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:58:02.871855  224051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37569
	I1108 23:58:02.872358  224051 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:58:02.872961  224051 main.go:141] libmachine: Using API Version  1
	I1108 23:58:02.872984  224051 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:58:02.873355  224051 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:58:02.873580  224051 main.go:141] libmachine: (multinode-592243) Calling .DriverName
	I1108 23:58:02.873859  224051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 23:58:02.873890  224051 main.go:141] libmachine: (multinode-592243) Calling .GetSSHHostname
	I1108 23:58:02.876928  224051 main.go:141] libmachine: (multinode-592243) DBG | domain multinode-592243 has defined MAC address 52:54:00:3b:99:1c in network mk-multinode-592243
	I1108 23:58:02.877338  224051 main.go:141] libmachine: (multinode-592243) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:99:1c", ip: ""} in network mk-multinode-592243: {Iface:virbr1 ExpiryTime:2023-11-09 00:55:03 +0000 UTC Type:0 Mac:52:54:00:3b:99:1c Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:multinode-592243 Clientid:01:52:54:00:3b:99:1c}
	I1108 23:58:02.877368  224051 main.go:141] libmachine: (multinode-592243) DBG | domain multinode-592243 has defined IP address 192.168.39.198 and MAC address 52:54:00:3b:99:1c in network mk-multinode-592243
	I1108 23:58:02.877506  224051 main.go:141] libmachine: (multinode-592243) Calling .GetSSHPort
	I1108 23:58:02.877679  224051 main.go:141] libmachine: (multinode-592243) Calling .GetSSHKeyPath
	I1108 23:58:02.877844  224051 main.go:141] libmachine: (multinode-592243) Calling .GetSSHUsername
	I1108 23:58:02.877984  224051 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/multinode-592243/id_rsa Username:docker}
	I1108 23:58:02.977527  224051 ssh_runner.go:195] Run: systemctl --version
	I1108 23:58:02.983380  224051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:58:02.999938  224051 kubeconfig.go:92] found "multinode-592243" server: "https://192.168.39.198:8443"
	I1108 23:58:02.999969  224051 api_server.go:166] Checking apiserver status ...
	I1108 23:58:03.000005  224051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 23:58:03.016348  224051 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup
	I1108 23:58:03.026826  224051 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/podc03d3ae4efb8467a95a49841f770f24c/e6a6de53feb38c16ee72b01b7bce2c5c1b89ad0c08d37c2b3f3bdebc2b822b16"
	I1108 23:58:03.026909  224051 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podc03d3ae4efb8467a95a49841f770f24c/e6a6de53feb38c16ee72b01b7bce2c5c1b89ad0c08d37c2b3f3bdebc2b822b16/freezer.state
	I1108 23:58:03.036607  224051 api_server.go:204] freezer state: "THAWED"
	I1108 23:58:03.036637  224051 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
	I1108 23:58:03.041783  224051 api_server.go:279] https://192.168.39.198:8443/healthz returned 200:
	ok
	I1108 23:58:03.041815  224051 status.go:421] multinode-592243 apiserver status = Running (err=<nil>)
	I1108 23:58:03.041824  224051 status.go:257] multinode-592243 status: &{Name:multinode-592243 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 23:58:03.041840  224051 status.go:255] checking status of multinode-592243-m02 ...
	I1108 23:58:03.042194  224051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:58:03.042232  224051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:58:03.057038  224051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41735
	I1108 23:58:03.057453  224051 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:58:03.057919  224051 main.go:141] libmachine: Using API Version  1
	I1108 23:58:03.057949  224051 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:58:03.058245  224051 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:58:03.058408  224051 main.go:141] libmachine: (multinode-592243-m02) Calling .GetState
	I1108 23:58:03.060024  224051 status.go:330] multinode-592243-m02 host status = "Running" (err=<nil>)
	I1108 23:58:03.060049  224051 host.go:66] Checking if "multinode-592243-m02" exists ...
	I1108 23:58:03.060441  224051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:58:03.060490  224051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:58:03.075306  224051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I1108 23:58:03.075713  224051 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:58:03.076122  224051 main.go:141] libmachine: Using API Version  1
	I1108 23:58:03.076144  224051 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:58:03.076477  224051 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:58:03.076684  224051 main.go:141] libmachine: (multinode-592243-m02) Calling .GetIP
	I1108 23:58:03.079351  224051 main.go:141] libmachine: (multinode-592243-m02) DBG | domain multinode-592243-m02 has defined MAC address 52:54:00:95:84:60 in network mk-multinode-592243
	I1108 23:58:03.079793  224051 main.go:141] libmachine: (multinode-592243-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:84:60", ip: ""} in network mk-multinode-592243: {Iface:virbr1 ExpiryTime:2023-11-09 00:56:25 +0000 UTC Type:0 Mac:52:54:00:95:84:60 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-592243-m02 Clientid:01:52:54:00:95:84:60}
	I1108 23:58:03.079823  224051 main.go:141] libmachine: (multinode-592243-m02) DBG | domain multinode-592243-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:95:84:60 in network mk-multinode-592243
	I1108 23:58:03.079928  224051 host.go:66] Checking if "multinode-592243-m02" exists ...
	I1108 23:58:03.080268  224051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:58:03.080313  224051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:58:03.094625  224051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I1108 23:58:03.095001  224051 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:58:03.095383  224051 main.go:141] libmachine: Using API Version  1
	I1108 23:58:03.095409  224051 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:58:03.095682  224051 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:58:03.095852  224051 main.go:141] libmachine: (multinode-592243-m02) Calling .DriverName
	I1108 23:58:03.096057  224051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 23:58:03.096077  224051 main.go:141] libmachine: (multinode-592243-m02) Calling .GetSSHHostname
	I1108 23:58:03.098901  224051 main.go:141] libmachine: (multinode-592243-m02) DBG | domain multinode-592243-m02 has defined MAC address 52:54:00:95:84:60 in network mk-multinode-592243
	I1108 23:58:03.099329  224051 main.go:141] libmachine: (multinode-592243-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:84:60", ip: ""} in network mk-multinode-592243: {Iface:virbr1 ExpiryTime:2023-11-09 00:56:25 +0000 UTC Type:0 Mac:52:54:00:95:84:60 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-592243-m02 Clientid:01:52:54:00:95:84:60}
	I1108 23:58:03.099364  224051 main.go:141] libmachine: (multinode-592243-m02) DBG | domain multinode-592243-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:95:84:60 in network mk-multinode-592243
	I1108 23:58:03.099496  224051 main.go:141] libmachine: (multinode-592243-m02) Calling .GetSSHPort
	I1108 23:58:03.099716  224051 main.go:141] libmachine: (multinode-592243-m02) Calling .GetSSHKeyPath
	I1108 23:58:03.099872  224051 main.go:141] libmachine: (multinode-592243-m02) Calling .GetSSHUsername
	I1108 23:58:03.100004  224051 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17586-201782/.minikube/machines/multinode-592243-m02/id_rsa Username:docker}
	I1108 23:58:03.182304  224051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 23:58:03.196385  224051 status.go:257] multinode-592243-m02 status: &{Name:multinode-592243-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1108 23:58:03.196435  224051 status.go:255] checking status of multinode-592243-m03 ...
	I1108 23:58:03.196798  224051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1108 23:58:03.196846  224051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1108 23:58:03.212275  224051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43825
	I1108 23:58:03.212791  224051 main.go:141] libmachine: () Calling .GetVersion
	I1108 23:58:03.213371  224051 main.go:141] libmachine: Using API Version  1
	I1108 23:58:03.213405  224051 main.go:141] libmachine: () Calling .SetConfigRaw
	I1108 23:58:03.213855  224051 main.go:141] libmachine: () Calling .GetMachineName
	I1108 23:58:03.214071  224051 main.go:141] libmachine: (multinode-592243-m03) Calling .GetState
	I1108 23:58:03.215872  224051 status.go:330] multinode-592243-m03 host status = "Stopped" (err=<nil>)
	I1108 23:58:03.215887  224051 status.go:343] host is not running, skipping remaining checks
	I1108 23:58:03.215895  224051 status.go:257] multinode-592243-m03 status: &{Name:multinode-592243-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (12.40s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (28.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 node start m03 --alsologtostderr
E1108 23:58:04.431878  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-592243 node start m03 --alsologtostderr: (27.522876604s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.20s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (332.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-592243
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-592243
E1108 23:58:52.926458  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:59:20.613331  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1108 23:59:27.482585  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1109 00:00:23.687013  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-592243: (3m15.185888166s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592243 --wait=true -v=8 --alsologtostderr
E1109 00:03:04.431727  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1109 00:03:52.926810  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-592243 --wait=true -v=8 --alsologtostderr: (2m17.201163579s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-592243
--- PASS: TestMultiNode/serial/RestartKeepsNodes (332.52s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-592243 node delete m03: (1.348293235s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.98s)
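
Aside on the readiness check above: the `kubectl get nodes -o go-template=...` step prints the Ready condition status for each node by evaluating a Go template over the node list. Below is a minimal, self-contained sketch, not part of the test suite, that runs the same template string from the log against a small, hypothetical node list; the node names and statuses are made up for illustration.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Template string copied from the kubectl invocation in the log above
// (outer shell quoting removed).
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// Hypothetical, minimal stand-in for a `kubectl get nodes -o json` response;
// only the fields the template touches are included.
const nodeList = `{"items":[
  {"metadata":{"name":"multinode-592243"},"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"metadata":{"name":"multinode-592243-m02"},"status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodeList), &nodes); err != nil {
		panic(err)
	}
	// kubectl's go-template output is, in essence, text/template evaluated over
	// the unstructured (lowercase-keyed) object, which is why these keys resolve.
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}

Running the sketch prints one " True" line per node in the hypothetical list, which is the shape of output the test inspects.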

                                                
                                    
TestMultiNode/serial/StopMultiNode (183.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 stop
E1109 00:05:23.687196  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1109 00:06:46.737017  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-592243 stop: (3m3.318191286s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-592243 status: exit status 7 (115.371448ms)

                                                
                                                
-- stdout --
	multinode-592243
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-592243-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-592243 status --alsologtostderr: exit status 7 (106.403266ms)

                                                
                                                
-- stdout --
	multinode-592243
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-592243-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 00:07:09.419750  226247 out.go:296] Setting OutFile to fd 1 ...
	I1109 00:07:09.419878  226247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:07:09.419886  226247 out.go:309] Setting ErrFile to fd 2...
	I1109 00:07:09.419890  226247 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:07:09.420080  226247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	I1109 00:07:09.420259  226247 out.go:303] Setting JSON to false
	I1109 00:07:09.420291  226247 mustload.go:65] Loading cluster: multinode-592243
	I1109 00:07:09.420413  226247 notify.go:220] Checking for updates...
	I1109 00:07:09.420703  226247 config.go:182] Loaded profile config "multinode-592243": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1109 00:07:09.420716  226247 status.go:255] checking status of multinode-592243 ...
	I1109 00:07:09.421118  226247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1109 00:07:09.421188  226247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1109 00:07:09.440423  226247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42475
	I1109 00:07:09.440916  226247 main.go:141] libmachine: () Calling .GetVersion
	I1109 00:07:09.441553  226247 main.go:141] libmachine: Using API Version  1
	I1109 00:07:09.441581  226247 main.go:141] libmachine: () Calling .SetConfigRaw
	I1109 00:07:09.441923  226247 main.go:141] libmachine: () Calling .GetMachineName
	I1109 00:07:09.442129  226247 main.go:141] libmachine: (multinode-592243) Calling .GetState
	I1109 00:07:09.444110  226247 status.go:330] multinode-592243 host status = "Stopped" (err=<nil>)
	I1109 00:07:09.444128  226247 status.go:343] host is not running, skipping remaining checks
	I1109 00:07:09.444134  226247 status.go:257] multinode-592243 status: &{Name:multinode-592243 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 00:07:09.444170  226247 status.go:255] checking status of multinode-592243-m02 ...
	I1109 00:07:09.444484  226247 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1109 00:07:09.444527  226247 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1109 00:07:09.459699  226247 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39701
	I1109 00:07:09.460199  226247 main.go:141] libmachine: () Calling .GetVersion
	I1109 00:07:09.460705  226247 main.go:141] libmachine: Using API Version  1
	I1109 00:07:09.460735  226247 main.go:141] libmachine: () Calling .SetConfigRaw
	I1109 00:07:09.461070  226247 main.go:141] libmachine: () Calling .GetMachineName
	I1109 00:07:09.461268  226247 main.go:141] libmachine: (multinode-592243-m02) Calling .GetState
	I1109 00:07:09.463094  226247 status.go:330] multinode-592243-m02 host status = "Stopped" (err=<nil>)
	I1109 00:07:09.463115  226247 status.go:343] host is not running, skipping remaining checks
	I1109 00:07:09.463122  226247 status.go:257] multinode-592243-m02 status: &{Name:multinode-592243-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.54s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (94.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592243 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1109 00:08:04.432129  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-592243 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m33.63010098s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-592243 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (94.23s)
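
For readers reproducing the readiness check at the end of this test, the same verification can be run by hand. The go-template is copied verbatim from the invocation above (quoting shown exactly as the test harness logs it); `minikube` stands in for the binary under test.

    minikube -p multinode-592243 status --alsologtostderr
    kubectl get nodes
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
    # after the restart, every node should print "True" for its Ready condition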

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (70.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-592243
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592243-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-592243-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (89.49395ms)

                                                
                                                
-- stdout --
	* [multinode-592243-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-592243-m02' is duplicated with machine name 'multinode-592243-m02' in profile 'multinode-592243'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-592243-m03 --driver=kvm2  --container-runtime=containerd
E1109 00:08:52.926299  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-592243-m03 --driver=kvm2  --container-runtime=containerd: (1m8.902448177s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-592243
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-592243: exit status 80 (262.468313ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-592243
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-592243-m03 already exists in multinode-592243-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-592243-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (70.19s)
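
To summarize what this test exercises, the two name-conflict cases can be reproduced with the commands from this run (sketch only; profile names are the ones used above, and `minikube` stands in for the binary under test).

    # reusing an existing machine name as a profile name is rejected up front (exit status 14, MK_USAGE)
    minikube start -p multinode-592243-m02 --driver=kvm2 --container-runtime=containerd
    # a fresh profile name starts fine, but "node add" then refuses the colliding node (exit status 80, GUEST_NODE_ADD)
    minikube start -p multinode-592243-m03 --driver=kvm2 --container-runtime=containerd
    minikube node add -p multinode-592243
    minikube delete -p multinode-592243-m03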

                                                
                                    
TestPreload (245.17s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-811901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E1109 00:10:15.974201  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1109 00:10:23.687466  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-811901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m24.168488883s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-811901 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-811901
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-811901: (1m32.107109259s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-811901 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E1109 00:13:04.432666  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1109 00:13:52.926772  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-811901 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m6.664384172s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-811901 image list
helpers_test.go:175: Cleaning up "test-preload-811901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-811901
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-811901: (1.096000114s)
--- PASS: TestPreload (245.17s)
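
For context, the preload scenario above boils down to the following sequence (commands taken from this run; `minikube` stands in for the binary under test): start without a preload on an older Kubernetes, pull an extra image, stop, restart on current defaults, and confirm the image is still listed.

    minikube start -p test-preload-811901 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=kvm2 --container-runtime=containerd
    minikube -p test-preload-811901 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-811901
    minikube start -p test-preload-811901 --memory=2200 --driver=kvm2 --container-runtime=containerd
    minikube -p test-preload-811901 image list    # busybox should still be listed after the restart
    minikube delete -p test-preload-811901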

                                                
                                    
TestScheduledStopUnix (139.96s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-990520 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-990520 --memory=2048 --driver=kvm2  --container-runtime=containerd: (1m7.921565473s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-990520 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-990520 -n scheduled-stop-990520
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-990520 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-990520 --cancel-scheduled
E1109 00:15:23.686845  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-990520 -n scheduled-stop-990520
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-990520
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-990520 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1109 00:16:07.485784  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-990520
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-990520: exit status 7 (85.520965ms)

                                                
                                                
-- stdout --
	scheduled-stop-990520
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-990520 -n scheduled-stop-990520
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-990520 -n scheduled-stop-990520: exit status 7 (87.309008ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-990520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-990520
--- PASS: TestScheduledStopUnix (139.96s)
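
The scheduled-stop workflow checked above, condensed into a hand-runnable sketch (same profile and flags as this run; `minikube` stands in for the binary under test):

    minikube start -p scheduled-stop-990520 --memory=2048 --driver=kvm2 --container-runtime=containerd
    minikube stop -p scheduled-stop-990520 --schedule 5m             # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p scheduled-stop-990520
    minikube stop -p scheduled-stop-990520 --cancel-scheduled        # cancel the pending stop
    minikube stop -p scheduled-stop-990520 --schedule 15s            # re-arm with a short timer and let it fire
    minikube status -p scheduled-stop-990520                         # exit status 7 once the host reports Stopped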

                                                
                                    
TestRunningBinaryUpgrade (183.85s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.3413109688.exe start -p running-upgrade-786338 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.3413109688.exe start -p running-upgrade-786338 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m46.795798933s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-786338 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-786338 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m15.227630095s)
helpers_test.go:175: Cleaning up "running-upgrade-786338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-786338
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-786338: (1.239491025s)
--- PASS: TestRunningBinaryUpgrade (183.85s)

                                                
                                    
TestKubernetesUpgrade (232.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1109 00:20:23.687068  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m33.369528737s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-402954
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-402954: (2.1259485s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-402954 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-402954 status --format={{.Host}}: exit status 7 (130.898114ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m6.22189918s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-402954 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (112.833368ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-402954] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-402954
	    minikube start -p kubernetes-upgrade-402954 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4029542 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-402954 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1109 00:23:04.432095  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m8.754826044s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-402954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-402954
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-402954: (1.462530884s)
--- PASS: TestKubernetesUpgrade (232.26s)
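
The upgrade/downgrade behaviour above reduces to the sequence below; the recovery commands at the end are the ones minikube itself suggests in the K8S_DOWNGRADE_UNSUPPORTED message (`minikube` stands in for the binary under test).

    minikube start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=containerd
    minikube stop -p kubernetes-upgrade-402954
    minikube start -p kubernetes-upgrade-402954 --memory=2200 --kubernetes-version=v1.28.3 --driver=kvm2 --container-runtime=containerd
    # an in-place downgrade is refused (exit status 106); recreate the cluster instead:
    minikube delete -p kubernetes-upgrade-402954
    minikube start -p kubernetes-upgrade-402954 --kubernetes-version=v1.16.0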

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-520091 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-520091 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (113.563499ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-520091] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
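
For reference, the flag conflict above and the workaround minikube prints look like this in practice (sketch; `minikube` stands in for the binary under test):

    # --no-kubernetes and --kubernetes-version cannot be combined (exit status 14, MK_USAGE)
    minikube start -p NoKubernetes-520091 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=containerd
    # if a version is pinned in the global config, clear it first, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-520091 --no-kubernetes --driver=kvm2 --container-runtime=containerd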

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (123.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-520091 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-520091 --driver=kvm2  --container-runtime=containerd: (2m3.463880753s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-520091 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (123.79s)

                                                
                                    
TestNetworkPlugins/group/false (3.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-565009 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-565009 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (128.463196ms)

                                                
                                                
-- stdout --
	* [false-565009] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17586
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 00:16:24.578298  230297 out.go:296] Setting OutFile to fd 1 ...
	I1109 00:16:24.578495  230297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:16:24.578510  230297 out.go:309] Setting ErrFile to fd 2...
	I1109 00:16:24.578524  230297 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 00:16:24.578749  230297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17586-201782/.minikube/bin
	I1109 00:16:24.579394  230297 out.go:303] Setting JSON to false
	I1109 00:16:24.580535  230297 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":25139,"bootTime":1699463846,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1109 00:16:24.580608  230297 start.go:138] virtualization: kvm guest
	I1109 00:16:24.583329  230297 out.go:177] * [false-565009] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1109 00:16:24.585484  230297 out.go:177]   - MINIKUBE_LOCATION=17586
	I1109 00:16:24.585530  230297 notify.go:220] Checking for updates...
	I1109 00:16:24.587144  230297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 00:16:24.588888  230297 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17586-201782/kubeconfig
	I1109 00:16:24.590483  230297 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17586-201782/.minikube
	I1109 00:16:24.592064  230297 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1109 00:16:24.593519  230297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 00:16:24.595796  230297 config.go:182] Loaded profile config "NoKubernetes-520091": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1109 00:16:24.595923  230297 config.go:182] Loaded profile config "force-systemd-env-639540": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1109 00:16:24.596034  230297 config.go:182] Loaded profile config "offline-containerd-504034": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1109 00:16:24.596141  230297 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 00:16:24.635347  230297 out.go:177] * Using the kvm2 driver based on user configuration
	I1109 00:16:24.636916  230297 start.go:298] selected driver: kvm2
	I1109 00:16:24.636934  230297 start.go:902] validating driver "kvm2" against <nil>
	I1109 00:16:24.636945  230297 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 00:16:24.639225  230297 out.go:177] 
	W1109 00:16:24.640600  230297 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1109 00:16:24.642022  230297 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-565009 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-565009" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-565009

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-565009"

                                                
                                                
----------------------- debugLogs end: false-565009 [took: 3.443420582s] --------------------------------
helpers_test.go:175: Cleaning up "false-565009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-565009
--- PASS: TestNetworkPlugins/group/false (3.73s)
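
The "false" CNI group is expected to fail fast with containerd; a one-line reproduction of the validation (same flags as this run, `minikube` standing in for the binary under test):

    # containerd requires a CNI, so --cni=false is rejected before any VM is created (exit status 14, MK_USAGE)
    minikube start -p false-565009 --memory=2048 --cni=false --driver=kvm2 --container-runtime=containerd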

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-520091 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-520091 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (24.148683073s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-520091 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-520091 status -o json: exit status 2 (303.971164ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-520091","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-520091
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-520091: (1.3810622s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.83s)

                                                
                                    
TestNoKubernetes/serial/Start (31.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-520091 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-520091 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (31.26900501s)
--- PASS: TestNoKubernetes/serial/Start (31.27s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-520091 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-520091 "sudo systemctl is-active --quiet service kubelet": exit status 1 (239.693257ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
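
The check itself is a single ssh probe; a non-zero exit is the passing outcome for a --no-kubernetes profile (sketch, `minikube` standing in for the binary under test):

    minikube ssh -p NoKubernetes-520091 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero here means the kubelet unit is inactive, which is what the test asserts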

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.82s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-520091
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-520091: (2.122891544s)
--- PASS: TestNoKubernetes/serial/Stop (2.12s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (42.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-520091 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-520091 --driver=kvm2  --container-runtime=containerd: (42.082899822s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.08s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-520091 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-520091 "sudo systemctl is-active --quiet service kubelet": exit status 1 (257.836259ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (166.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.779123201.exe start -p stopped-upgrade-035174 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.779123201.exe start -p stopped-upgrade-035174 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m26.503716113s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.779123201.exe -p stopped-upgrade-035174 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.779123201.exe -p stopped-upgrade-035174 stop: (2.171666515s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-035174 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-035174 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m17.991032055s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (166.67s)
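
The stopped-binary upgrade path, condensed; note that the v1.26.0 executable path is the temporary download used by this particular run, not a stable location.

    /tmp/minikube-v1.26.0.779123201.exe start -p stopped-upgrade-035174 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    /tmp/minikube-v1.26.0.779123201.exe -p stopped-upgrade-035174 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-035174 --memory=2200 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 logs -p stopped-upgrade-035174    # exercised separately in TestStoppedBinaryUpgrade/MinikubeLogs below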

                                                
                                    
TestPause/serial/Start (137.47s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-170065 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-170065 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m17.465723775s)
--- PASS: TestPause/serial/Start (137.47s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (126.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m6.606332316s)
--- PASS: TestNetworkPlugins/group/auto/Start (126.61s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-035174
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-035174: (1.509977043s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.51s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (103.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E1109 00:23:26.737687  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1109 00:23:52.927328  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m43.085996844s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (103.09s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (115.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m55.36176764s)
--- PASS: TestNetworkPlugins/group/calico/Start (115.36s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-565009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-565009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vhszp" [ab876329-f6d2-4fb3-bb74-81686cc66764] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vhszp" [ab876329-f6d2-4fb3-bb74-81686cc66764] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.015737647s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.53s)

TestPause/serial/SecondStartNoReconfiguration (7.86s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-170065 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-170065 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (7.841225929s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.86s)

TestPause/serial/Pause (0.85s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-170065 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-170065 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-170065 --output=json --layout=cluster: exit status 2 (324.743961ms)

-- stdout --
	{"Name":"pause-170065","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-170065","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestNetworkPlugins/group/auto/DNS (26.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-565009 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-565009 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.214340469s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-565009 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-565009 exec deployment/netcat -- nslookup kubernetes.default: (10.209552408s)
--- PASS: TestNetworkPlugins/group/auto/DNS (26.23s)

TestPause/serial/Unpause (0.76s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-170065 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

TestPause/serial/PauseAgain (1.47s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-170065 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-170065 --alsologtostderr -v=5: (1.466705003s)
--- PASS: TestPause/serial/PauseAgain (1.47s)

TestPause/serial/DeletePaused (1.19s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-170065 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-170065 --alsologtostderr -v=5: (1.193164305s)
--- PASS: TestPause/serial/DeletePaused (1.19s)

TestPause/serial/VerifyDeletedResources (0.55s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.55s)

TestNetworkPlugins/group/custom-flannel/Start (107.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m47.759775294s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (107.76s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mfvsz" [1bdc2299-fb5c-4f16-b59c-43b4df680790] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023336485s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-565009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-565009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kxgx6" [a509a742-004d-4fb6-baaf-02228aec95dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kxgx6" [a509a742-004d-4fb6-baaf-02228aec95dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.013126162s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-565009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.41s)

TestNetworkPlugins/group/kindnet/HairPin (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.38s)

TestNetworkPlugins/group/enable-default-cni/Start (97.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m37.800998949s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (97.80s)

TestNetworkPlugins/group/flannel/Start (134.21s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m14.20777918s)
--- PASS: TestNetworkPlugins/group/flannel/Start (134.21s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-c4fkl" [6f6f27c5-a69e-4182-8543-910ebe6d3526] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.028748001s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-565009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (13.46s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-565009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xw56r" [3f7624cb-ff46-4cd7-b0f3-f94f0817a65c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xw56r" [3f7624cb-ff46-4cd7-b0f3-f94f0817a65c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.015000137s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.46s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-565009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/bridge/Start (88.5s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-565009 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m28.503206151s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.50s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-565009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-565009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fxdb8" [9f221689-9cb3-45be-9734-23697989c1f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fxdb8" [9f221689-9cb3-45be-9734-23697989c1f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.014331395s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.44s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-565009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-565009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-565009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h9k5h" [ba6ca70d-9489-4670-ad3d-eee4b84207a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h9k5h" [ba6ca70d-9489-4670-ad3d-eee4b84207a7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.030751798s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.63s)

TestStartStop/group/old-k8s-version/serial/FirstStart (136.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-883154 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-883154 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m16.535562133s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.54s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-565009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestStartStop/group/no-preload/serial/FirstStart (90.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-488211 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-488211 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m30.880210343s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.88s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gdnz8" [db43123f-ab85-4b2a-a739-218ec950c603] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.035048138s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-565009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.51s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-565009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5bbq4" [cd1ab95d-13c3-4d69-8976-4cd71096060f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5bbq4" [cd1ab95d-13c3-4d69-8976-4cd71096060f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.037534802s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.51s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-565009 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-565009 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6ggsj" [b142e2bb-8184-4a90-ba3f-e1e55d18dcf1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 00:28:04.432143  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-6ggsj" [b142e2bb-8184-4a90-ba3f-e1e55d18dcf1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.019023397s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-565009 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/DNS (21.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-565009 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-565009 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.190002863s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-565009 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-565009 exec deployment/netcat -- nslookup kubernetes.default: (5.229400384s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.20s)

TestStartStop/group/embed-certs/serial/FirstStart (129.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-021465 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-021465 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (2m9.642471691s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (129.64s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-565009 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-530471 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1109 00:28:52.927007  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-530471 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m37.644082834s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.64s)

TestStartStop/group/no-preload/serial/DeployApp (9.69s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-488211 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [65ceec8d-be5c-4e9a-ad07-b9e3253235f3] Pending
helpers_test.go:344: "busybox" [65ceec8d-be5c-4e9a-ad07-b9e3253235f3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [65ceec8d-be5c-4e9a-ad07-b9e3253235f3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.043692329s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-488211 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.69s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-488211 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-488211 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.19273456s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-488211 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/no-preload/serial/Stop (92.9s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-488211 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-488211 --alsologtostderr -v=3: (1m32.8980677s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.90s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-883154 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [526d8703-aa23-40df-88c3-f3ffae068d84] Pending
helpers_test.go:344: "busybox" [526d8703-aa23-40df-88c3-f3ffae068d84] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [526d8703-aa23-40df-88c3-f3ffae068d84] Running
E1109 00:29:29.746701  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:29.752080  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:29.762534  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:29.782845  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:29.823237  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:29.903965  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:30.064874  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:30.385895  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:31.026641  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:32.307764  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.386044814s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-883154 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.85s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-883154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1109 00:29:34.868866  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-883154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.121685001s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-883154 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/old-k8s-version/serial/Stop (93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-883154 --alsologtostderr -v=3
E1109 00:29:39.990012  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:29:50.230425  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:30:03.360901  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:03.366510  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:03.376812  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:03.397204  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:03.437559  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:03.517889  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:03.679079  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:03.999965  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:04.640865  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:05.922070  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:08.482675  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:10.710686  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:30:13.603676  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:30:23.687043  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1109 00:30:23.844402  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-883154 --alsologtostderr -v=3: (1m32.995594392s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (93.00s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-530471 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4a187a92-28e7-4b43-9b80-2647602881b7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4a187a92-28e7-4b43-9b80-2647602881b7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.029584593s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-530471 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.56s)

TestStartStop/group/embed-certs/serial/DeployApp (8.43s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-021465 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e6bf9e0-cbd5-4745-be8b-6cc9f593dafd] Pending
helpers_test.go:344: "busybox" [3e6bf9e0-cbd5-4745-be8b-6cc9f593dafd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e6bf9e0-cbd5-4745-be8b-6cc9f593dafd] Running
E1109 00:30:44.325318  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.036621815s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-021465 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-530471 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-530471 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.134433447s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-530471 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-530471 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-530471 --alsologtostderr -v=3: (1m31.884265112s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.88s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-021465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-021465 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.18206308s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-021465 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/embed-certs/serial/Stop (92.4s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-021465 --alsologtostderr -v=3
E1109 00:30:51.670988  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-021465 --alsologtostderr -v=3: (1m32.400911656s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.40s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-488211 -n no-preload-488211
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-488211 -n no-preload-488211: exit status 7 (86.797515ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-488211 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (307.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-488211 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1109 00:30:55.711454  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:30:55.716799  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:30:55.727263  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:30:55.747668  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:30:55.788045  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:30:55.868425  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:30:56.029482  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:30:56.350457  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:30:56.990920  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:30:58.272081  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:31:00.832397  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:31:05.953072  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-488211 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m7.49126785s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-488211 -n no-preload-488211
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (307.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-883154 -n old-k8s-version-883154
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-883154 -n old-k8s-version-883154: exit status 7 (88.262486ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-883154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (348.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-883154 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E1109 00:31:16.193575  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:31:25.286494  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:31:35.725360  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:35.730762  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:35.741095  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:35.761434  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:35.801843  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:35.882259  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:36.042727  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:36.363871  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:36.674704  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:31:37.004514  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:38.285170  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:40.845570  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:45.966020  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:31:56.206298  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:32:08.494668  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:08.500030  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:08.510504  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:08.530844  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:08.571252  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:08.651701  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:08.812627  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:09.133660  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:09.774790  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:11.054952  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-883154 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (5m47.995379675s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-883154 -n old-k8s-version-883154
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (348.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-530471 -n default-k8s-diff-port-530471
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-530471 -n default-k8s-diff-port-530471: exit status 7 (106.534069ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-530471 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (311.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-530471 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1109 00:32:13.591255  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:32:13.615513  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:16.687088  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:32:17.635536  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:32:18.736393  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-530471 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m11.625551434s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-530471 -n default-k8s-diff-port-530471
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (311.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021465 -n embed-certs-021465
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021465 -n embed-certs-021465: exit status 7 (97.335989ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-021465 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (344.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-021465 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1109 00:32:28.977046  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:47.207454  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:32:47.486281  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1109 00:32:49.457632  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:32:54.258887  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:54.264262  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:54.274964  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:54.295385  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:54.335907  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:54.416711  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:54.577267  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:54.897880  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:55.538918  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:56.820199  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:32:57.647356  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:32:59.380675  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:33:02.528022  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:02.533396  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:02.543772  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:02.564194  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:02.604594  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:02.685103  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:02.845493  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:03.166277  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:03.807064  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:04.431860  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
E1109 00:33:04.501172  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:33:05.088161  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:07.648958  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:12.769741  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:14.742055  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:33:23.010315  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:30.418218  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:33:35.223040  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:33:39.555888  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:33:43.490598  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:33:52.926285  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/ingress-addon-legacy-856841/client.crt: no such file or directory
E1109 00:34:16.184070  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:34:19.567959  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
E1109 00:34:24.451143  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:34:29.746835  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:34:52.338634  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
E1109 00:34:57.432314  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/auto-565009/client.crt: no such file or directory
E1109 00:35:03.360409  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:35:23.686955  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/functional-400359/client.crt: no such file or directory
E1109 00:35:31.048479  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/kindnet-565009/client.crt: no such file or directory
E1109 00:35:38.104837  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:35:46.372163  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:35:55.711513  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-021465 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m43.91316288s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-021465 -n embed-certs-021465
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (344.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xq7qc" [3f5ed819-51a2-4813-8649-555b3232abc4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025139473s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)
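UserAppExistsAfterStop waits up to 9m0s for a Running pod matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. Outside the harness, roughly the same check can be approximated with `kubectl wait` against the profile's context; the sketch below is only an approximation (the Ready condition and the kubectl invocation are assumptions, not what helpers_test.go literally does), written in Go to match the rest of the suite:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	context := "no-preload-488211" // kubeconfig context created by this profile

	// Wait up to 9 minutes for the dashboard pod to report Ready, using the
	// same label selector the test waits on.
	cmd := exec.Command("kubectl", "--context", context,
		"-n", "kubernetes-dashboard",
		"wait", "--for=condition=Ready",
		"pod", "-l", "k8s-app=kubernetes-dashboard",
		"--timeout="+(9*time.Minute).String())
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("dashboard pod never became Ready: %v\n%s", err, out)
	}
	log.Printf("dashboard pod ready:\n%s", out)
}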

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xq7qc" [3f5ed819-51a2-4813-8649-555b3232abc4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018019257s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-488211 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-488211 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
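VerifyKubernetesImages shells into the node with `minikube ssh -p <profile> "sudo crictl images -o json"` and logs every repo tag it does not treat as a minikube/Kubernetes system image (the busybox and kindnetd lines in these blocks). A rough sketch of that kind of check, assuming crictl's JSON layout (images[].repoTags) and using a deliberately simplified registry.k8s.io allow-list rather than the test's real list:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// Reduced view of `crictl images -o json` output; field names are an
// assumption about crictl's JSON, limited to what this check needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	profile := "no-preload-488211" // example profile from this run

	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile,
		"sudo crictl images -o json").Output()
	if err != nil {
		log.Fatalf("listing images failed: %v", err)
	}

	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatalf("parsing crictl JSON: %v", err)
	}

	// Simplified allow-list: anything outside registry.k8s.io is reported,
	// mirroring the "Found non-minikube image" lines in the log.
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}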

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-488211 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-488211 -n no-preload-488211
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-488211 -n no-preload-488211: exit status 2 (288.352654ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-488211 -n no-preload-488211
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-488211 -n no-preload-488211: exit status 2 (289.63173ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-488211 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-488211 -n no-preload-488211
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-488211 -n no-preload-488211
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.93s)
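The Pause step is a round trip: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (each via `status --format`, which exits 2 in that state and is logged as "(may be ok)"), then unpause and re-check both. A compact sketch of that sequence, again only illustrative and reusing the profile name from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

const minikube = "out/minikube-linux-amd64"

// statusField runs `minikube status --format={{.<field>}}` and returns the
// printed value plus the exit code (2 is expected while paused).
func statusField(profile, field string) (string, int) {
	cmd := exec.Command(minikube, "status", "--format={{."+field+"}}", "-p", profile)
	out, err := cmd.CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	} else if err != nil {
		log.Fatalf("status %s failed: %v", field, err)
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "no-preload-488211" // example profile from this run

	if out, err := exec.Command(minikube, "pause", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("pause failed: %v\n%s", err, out)
	}
	api, code := statusField(profile, "APIServer")
	fmt.Printf("APIServer=%s (exit %d)\n", api, code) // expect Paused, exit 2
	kubelet, code := statusField(profile, "Kubelet")
	fmt.Printf("Kubelet=%s (exit %d)\n", kubelet, code) // expect Stopped, exit 2

	if out, err := exec.Command(minikube, "unpause", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("unpause failed: %v\n%s", err, out)
	}
}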

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (83.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-158499 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1109 00:36:23.396906  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/calico-565009/client.crt: no such file or directory
E1109 00:36:35.726275  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-158499 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m23.19096387s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (83.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-zrn7t" [394e02f9-49cf-44e7-9af6-959ca5488ed5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.021286874s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-zrn7t" [394e02f9-49cf-44e7-9af6-959ca5488ed5] Running
E1109 00:37:03.408849  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/custom-flannel-565009/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013495281s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-883154 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-883154 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-883154 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-883154 -n old-k8s-version-883154
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-883154 -n old-k8s-version-883154: exit status 2 (291.996337ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-883154 -n old-k8s-version-883154
E1109 00:37:08.494507  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-883154 -n old-k8s-version-883154: exit status 2 (291.050597ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-883154 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-883154 -n old-k8s-version-883154
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-883154 -n old-k8s-version-883154
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5h8sm" [aeb6277b-6173-4760-b36e-d1e26dbb5049] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024548024s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-5h8sm" [aeb6277b-6173-4760-b36e-d1e26dbb5049] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014064297s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-530471 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-530471 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-530471 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-530471 -n default-k8s-diff-port-530471
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-530471 -n default-k8s-diff-port-530471: exit status 2 (286.772915ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-530471 -n default-k8s-diff-port-530471
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-530471 -n default-k8s-diff-port-530471: exit status 2 (290.213251ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-530471 --alsologtostderr -v=1
E1109 00:37:36.179184  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/enable-default-cni-565009/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-530471 -n default-k8s-diff-port-530471
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-530471 -n default-k8s-diff-port-530471
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-158499 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-158499 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.457788413s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-158499 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-158499 --alsologtostderr -v=3: (7.131049193s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-158499 -n newest-cni-158499
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-158499 -n newest-cni-158499: exit status 7 (92.956185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-158499 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (49.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-158499 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3
E1109 00:37:54.258233  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/flannel-565009/client.crt: no such file or directory
E1109 00:38:02.529726  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/bridge-565009/client.crt: no such file or directory
E1109 00:38:04.431774  208963 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17586-201782/.minikube/profiles/addons-040821/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-158499 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.3: (49.682663728s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-158499 -n newest-cni-158499
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (49.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-82wcz" [9b003fa8-1f01-4b45-97d5-e8dbd1a3158a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020206662s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-82wcz" [9b003fa8-1f01-4b45-97d5-e8dbd1a3158a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014814733s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-021465 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-021465 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-021465 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021465 -n embed-certs-021465
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021465 -n embed-certs-021465: exit status 2 (273.336908ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-021465 -n embed-certs-021465
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-021465 -n embed-certs-021465: exit status 2 (266.038595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-021465 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-021465 -n embed-certs-021465
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-021465 -n embed-certs-021465
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.79s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-158499 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-158499 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-158499 -n newest-cni-158499
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-158499 -n newest-cni-158499: exit status 2 (276.108844ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-158499 -n newest-cni-158499
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-158499 -n newest-cni-158499: exit status 2 (272.309334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-158499 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-158499 -n newest-cni-158499
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-158499 -n newest-cni-158499
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                    

Test skip (36/306)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.3/cached-images 0
13 TestDownloadOnly/v1.28.3/binaries 0
14 TestDownloadOnly/v1.28.3/kubectl 0
18 TestDownloadOnlyKic 0
32 TestAddons/parallel/Olm 0
44 TestDockerFlags 0
47 TestDockerEnvContainerd 0
49 TestHyperKitDriverInstallOrUpdate 0
50 TestHyperkitDriverSkipUpgrade 0
101 TestFunctional/parallel/DockerEnv 0
102 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
150 TestGvisorAddon 0
151 TestImageBuild 0
184 TestKicCustomNetwork 0
185 TestKicExistingNetwork 0
186 TestKicCustomSubnet 0
187 TestKicStaticIP 0
218 TestChangeNoneUser 0
221 TestScheduledStopWindows 0
223 TestSkaffold 0
225 TestInsufficientStorage 0
229 TestMissingContainerUpgrade 0
234 TestNetworkPlugins/group/kubenet 3.75
243 TestNetworkPlugins/group/cilium 4.06
258 TestStartStop/group/disable-driver-mounts 0.18
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-565009 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-565009" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-565009

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-565009"

                                                
                                                
----------------------- debugLogs end: kubenet-565009 [took: 3.580321717s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-565009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-565009
--- SKIP: TestNetworkPlugins/group/kubenet (3.75s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-565009 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-565009" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-565009

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-565009" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-565009"

                                                
                                                
----------------------- debugLogs end: cilium-565009 [took: 3.895463777s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-565009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-565009
--- SKIP: TestNetworkPlugins/group/cilium (4.06s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-558283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-558283
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    